Conference Paper

Play the Imitation Game: Model Extraction Attack against Autonomous Driving Localization

Proceedings of the 38th Annual Computer Security Applications Conference

Publication Date

December 5, 2022

Author(s)

Qifan Zhang, Junjie Shen, Mingtian Tan, Zhe Zhou, Zhou Li, Qi Alfred Chen, Haipeng Zhang

Abstract

The security of Autonomous Driving (AD) systems has recently been gaining the attention of researchers and the public. Given that AD companies have invested huge amounts of resources in developing their AD models, e.g., localization models, these models, especially their parameters, are important intellectual property and deserve strong protection. In this work, we examine whether the confidentiality of production-grade Multi-Sensor Fusion (MSF) models, in particular the Error-State Kalman Filter (ESKF), can be stolen by an outside adversary. We propose a new model extraction attack called TaskMaster that can infer the secret ESKF parameters under a black-box assumption. In essence, TaskMaster trains a substitutional ESKF model to recover the parameters by observing the inputs and outputs of the targeted AD system. To recover the parameters precisely, we combine a set of techniques, such as gradient-based optimization, search-space reduction, and multi-stage optimization. Evaluation results on a real-world vehicle sensor dataset show that TaskMaster is practical. For example, with 25 seconds of AD sensor data for training, the substitutional ESKF model reaches centimeter-level accuracy compared with the ground-truth model.
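
To give a flavor of the attack idea, the sketch below shows how unknown Kalman-filter noise parameters could be recovered by fitting a substitute filter to observed input/output traces. It is a minimal, hypothetical illustration only: a scalar constant-velocity Kalman filter stands in for the production ESKF, the gradient is approximated by finite differences rather than the paper's optimization pipeline, and all names and values (run_filter, q, r, learning rate, etc.) are assumptions, not taken from the paper.

    # Illustrative sketch only: fit a substitute Kalman filter to the
    # outputs observed from a black-box target filter. A 1-D constant-
    # velocity filter stands in for the ESKF; everything here is hypothetical.
    import numpy as np

    def run_filter(q, r, zs, dt=0.1):
        """Scalar position/velocity Kalman filter with process noise q
        and measurement noise r, run over position measurements zs."""
        F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
        H = np.array([[1.0, 0.0]])                   # observe position only
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])          # process noise covariance
        R = np.array([[r]])                          # measurement noise covariance
        x = np.zeros((2, 1))
        P = np.eye(2)
        outputs = []
        for z in zs:
            x = F @ x                                # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                      # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            outputs.append(float(x[0, 0]))
        return np.array(outputs)

    def loss(params, zs, observed):
        """MSE between the substitute filter's poses and the poses
        observed from the (black-box) target filter."""
        q, r = np.exp(params)                        # keep noise terms positive
        return np.mean((run_filter(q, r, zs) - observed) ** 2)

    def extract(zs, observed, steps=300, lr=0.05, eps=1e-4):
        """Recover (q, r) by gradient descent with finite-difference gradients."""
        params = np.zeros(2)                         # start at q = r = 1
        for _ in range(steps):
            grad = np.zeros_like(params)
            for i in range(len(params)):
                bump = np.zeros_like(params)
                bump[i] = eps
                grad[i] = (loss(params + bump, zs, observed)
                           - loss(params - bump, zs, observed)) / (2 * eps)
            params -= lr * grad
        return np.exp(params)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(0, 25, 0.1)                    # 25 s of synthetic sensor data
        truth = 2.0 * t                              # constant-velocity trajectory
        zs = truth + rng.normal(0, 0.5, t.size)      # noisy position measurements
        observed = run_filter(q=0.2, r=0.25, zs=zs)  # "secret" target parameters
        q_hat, r_hat = extract(zs, observed)
        print(f"recovered q ~ {q_hat:.3f}, r ~ {r_hat:.3f}")

The real attack described in the abstract operates on a production-grade ESKF with many more parameters and adds search-space reduction and multi-stage optimization; this toy example only illustrates the core idea of fitting a substitutional filter to observed input/output behavior.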

Suggested Citation
Qifan Zhang, Junjie Shen, Mingtian Tan, Zhe Zhou, Zhou Li, Qi Alfred Chen and Haipeng Zhang (2022) “Play the Imitation Game: Model Extraction Attack against Autonomous Driving Localization”, in Proceedings of the 38th Annual Computer Security Applications Conference. New York, NY, USA: Association for Computing Machinery (ACSAC '22), pp. 56–70. Available at: https://doi.org/10.1145/3564625.3567977.