Conference Paper

End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering

2021 IEEE Intelligent Vehicles Symposium (IV)

Publication Date

July 1, 2021

Author(s)

Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen, Qi Zhu

Abstract

In the development of advanced driver-assistance systems (ADAS) and autonomous vehicles, machine learning techniques based on deep neural networks (DNNs) have been widely used for vehicle perception. These techniques offer significant improvement in average perception accuracy over traditional methods, but have been shown to be susceptible to adversarial attacks, where small perturbations in the input may cause significant errors in the perception results and lead to system failure. Most prior works addressing such adversarial attacks focus only on the sensing and perception modules. In this work, we propose an end-to-end approach that addresses the impact of adversarial attacks throughout the perception, planning, and control modules. In particular, we choose a target ADAS application, the automated lane centering system in OpenPilot, quantify the perception uncertainty under adversarial attacks, and design a robust planning and control module accordingly based on the uncertainty analysis. We evaluate our proposed approach using both a public dataset and a production-grade autonomous driving simulator. The experiment results demonstrate that our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.
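To illustrate the general idea of uncertainty-based mitigation described in the abstract, the sketch below shows one plausible way to estimate perception uncertainty from stochastic forward passes of a lane-detection DNN and to blend the current prediction with a motion-extrapolated prior when uncertainty is high. This is a minimal, hypothetical illustration; the function names, the variance-based uncertainty measure, and the linear blending rule are assumptions for exposition and are not the paper's exact formulation.

import numpy as np

def estimate_uncertainty(lane_samples):
    """Estimate perception uncertainty as the variance across stochastic
    forward passes (e.g., Monte Carlo dropout) of a lane-detection DNN.
    lane_samples: array of shape (num_passes, num_points) with the lateral
    offsets of the predicted lane line."""
    return np.mean(np.var(lane_samples, axis=0))

def robust_lane_estimate(current_lane, prior_lane, uncertainty, threshold=0.05):
    """Blend the current (possibly attacked) lane prediction with a prior
    extrapolated from past frames; the weight shifts toward the prior as
    estimated uncertainty grows."""
    w = np.clip(uncertainty / threshold, 0.0, 1.0)  # 0 = trust perception, 1 = trust prior
    return (1.0 - w) * current_lane + w * prior_lane

# Toy usage: five stochastic passes over a 10-point lane line
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.3, scale=0.1, size=(5, 10))  # large spread -> high uncertainty
prior = np.zeros(10)                                     # lane extrapolated from past frames
sigma2 = estimate_uncertainty(samples)
lane = robust_lane_estimate(samples.mean(axis=0), prior, sigma2)
print(f"uncertainty={sigma2:.4f}, blended mean lateral offset={lane.mean():.3f} m")

In this toy setup, a large spread across the stochastic passes pushes the blended estimate toward the extrapolated prior, which is the qualitative behavior a robust planning and control module would rely on to limit the influence of an attacked perception output.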

Suggested Citation
Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen and Qi Zhu (2021) "End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering", in 2021 IEEE Intelligent Vehicles Symposium (IV), pp. 266–273. doi: 10.1109/IV48863.2021.9575549.