JRC, ENISA report examines cybersecurity challenges in uptake of AI in autonomous driving

A new report by the EC’s Joint Research Centre (JRC) and the European Union Agency for Cybersecurity (ENISA) examines cybersecurity risks connected to artificial intelligence (AI) in autonomous vehicles and provides recommendations for mitigating them.

By removing the most common cause of traffic accidents—the human driver—autonomous vehicles are expected to reduce accidents and fatalities. However, they may pose a completely different type of risk to drivers, passengers and pedestrians.

The uptake of AI in autonomous driving brings about important cybersecurity concerns. The increased digitalization of vehicles and the inclusion of AI functionalities result in a larger attack surface and might significantly increase the incentives for attackers to target AVs. Cyberattacks against AVs do not only concern the particularities related to AI, but also include the security of the underlying digital infrastructure and related digital systems. It is thus crucial to evolve existing security processes and practices to consider this increased uptake of AI technologies and digitalization in vehicles, particularly in the context of autonomous driving.

—Dede et al.

Autonomous vehicles use artificial intelligence systems, which employ machine-learning techniques to collect, analyze and transfer data, in order to make decisions that in conventional cars are taken by humans. These systems, like all IT systems, are vulnerable to attacks that could compromise the proper functioning of the vehicle.

It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks. To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving.

—JRC Director-General Stephen Quest

When an insecure autonomous vehicle crosses the border of an EU Member State, so do its vulnerabilities. Security should not come as an afterthought, but should instead be a prerequisite for the trustworthy and reliable deployment of vehicles on Europe’s roads.

—EU Agency for Cybersecurity Executive Director Juhan Lepassaar

Vulnerabilities of AI in autonomous cars. Threats related to AI can be divided into two groups: intentional and unintentional.

  • Intentional threats include those stemming from malevolent exploitation of the limitations and vulnerabilities of AI and machine-learning (ML) methods to cause deliberate harm. Intentional misuse of AI changes the current cybersecurity landscape by introducing a new class of vulnerabilities and raising the ceiling of potential impact, the report says.

    The growing use of AI to automate decision-making in a diversity of sectors exposes digital systems to cyberattacks that can take advantage of the flaws and vulnerabilities of AI and ML methods. Since AI systems tend to be involved in high-stakes decisions, successful cyberattacks against them can have serious impacts. AI can also act as an enabler for cybercriminals, who can use it to automate aspects of their attacks and so launch them more quickly, at greater scale, at lower cost and with higher precision.

  • Unintentional threats come as side effects of benevolent usage, due to open issues inherent in the trustworthiness, robustness, limitations and safety of current AI and ML methods. They comprise unpredictable malfunctions, failures or negative outcomes caused by shortcomings, poor design and/or inherent peculiarities of AI and ML.

The report focuses on the exploitation of AI vulnerabilities to compromise the integrity and availability of AVs—i.e., intentional threats. In particular, adversarial ML is discussed as a prominent field of research linked to the cybersecurity of AI and as an immediate threat to AVs.


Illustration of an adversarial example using the Basic Iterative Method. The classifier used is Inceptionv3. The image comes from the validation set of the ImageNet dataset. (Left) Original image, correctly classified as a school bus. (Middle) Perturbation added to the image, with a 10x amplification. (Right) Adversarial example, wrongly classified with high confidence. Dede et al.
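
As an illustration of how such perturbations are generated, the following sketch implements the Basic Iterative Method named in the caption against a generic differentiable PyTorch image classifier. It is a minimal sketch, not code from the report; the function name and the default perturbation budget, step size and iteration count are assumptions.

# Minimal sketch of the Basic Iterative Method (BIM) against a generic
# PyTorch image classifier; defaults are illustrative, not from the report.
import torch
import torch.nn.functional as F

def bim_attack(model, image, label, eps=8/255, alpha=2/255, steps=10):
    """Iteratively perturb `image` within an L-infinity ball of radius `eps`
    so that the classifier's loss on the true `label` increases."""
    model.eval()
    original = image.clone().detach()
    adv = image.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        # Take a signed gradient step, then project back into the eps-ball
        # around the original image and into the valid pixel range [0, 1].
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, original - eps), original + eps)
        adv = adv.clamp(0.0, 1.0)

    return adv

Applied to a traffic-sign classifier, the returned tensor would typically look unchanged to a human observer while shifting the model's prediction, which is the effect shown in the figure.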


The report’s authors present five hypothetical attack scenarios that illustrate the exploitation of both classical cybersecurity weaknesses and AI-specific vulnerabilities in an automotive context. The scenarios are:

  1. Adversarial perturbation against image processing models for street sign recognition and lane detection.

  2. Man-in-the-middle attack on the planning module.

  3. Data poisoning attack on stop sign detection (illustrated in the sketch after this list).

  4. Attack related to the large-scale deployment of rogue firmware after hacking OEM back-end servers.

  5. Attack related to sensor/communication jamming and GNSS spoofing.
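
To make scenario 3 concrete, here is a deliberately simplified sketch of a data poisoning attack on a stop-sign training set: a small fraction of stop-sign images receive a visible trigger patch and a flipped label before training. The dataset layout (an N×H×W×C float array with values in [0, 1]), the class IDs and the trigger patch are hypothetical and are not taken from the report.

# Simplified illustration of scenario 3: injecting trigger-stamped,
# mislabeled stop-sign images into a training set (all details hypothetical).
import numpy as np

def poison_stop_signs(images, labels, stop_class, target_class,
                      poison_fraction=0.05, seed=0):
    """Return a poisoned copy of (images, labels): a fraction of stop-sign
    examples receive a visible corner patch and a flipped label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    stop_idx = np.flatnonzero(labels == stop_class)
    n_poison = int(len(stop_idx) * poison_fraction)
    chosen = rng.choice(stop_idx, size=n_poison, replace=False)

    for i in chosen:
        images[i, -6:, -6:, :] = 1.0   # white 6x6 trigger patch, bottom-right
        labels[i] = target_class       # e.g. relabel "stop" as another class

    return images, labels

A model trained on such a set can behave normally on clean stop signs while learning to associate the patch with the attacker's target class.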

Recommendations for more secure AI in autonomous vehicles. To improve AI security in autonomous vehicles, the report offers a number of recommendations in the following categories:

  • Systematic security validation of AI models and data (see the sketch after this list);

  • Supply chain challenges related to AI cybersecurity;

  • End-to-end holistic approach for integrating AI cybersecurity with traditional cybersecurity principles;

  • Incident handling and vulnerability discovery related to AI and lessons learned; and

  • Limited capacity and expertise on AI cybersecurity in the automotive industry.
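
As one possible reading of the first category, systematic security validation could mean routinely reporting accuracy under an adversarial perturbation budget alongside clean accuracy. The sketch below does this using the bim_attack function sketched earlier; the helper names and evaluation loop are assumptions, not a procedure prescribed by the report.

# Hypothetical validation step: compare clean and robust accuracy of a
# classifier over a labeled data loader, using an attack such as bim_attack.
import torch

@torch.no_grad()
def clean_accuracy(model, loader, device="cpu"):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def robust_accuracy(model, loader, attack, device="cpu"):
    """Accuracy on inputs perturbed by `attack` (e.g. the BIM sketch above)."""
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(model, x, y)   # gradients are needed inside the attack
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total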

Resources

  • Dede, G., Hamon, R., Junklewitz, H., Naydenov, R., Malatras, A. and Sanchez, I., “Cybersecurity challenges in the uptake of artificial intelligence in autonomous driving,” EUR 30568 EN, Publications Office of the European Union, Luxembourg, 2021, ISBN 978-92-76-28646-2, doi: 10.2760/551271
