Self-driving vehicles could be Waymo trouble than they’re worth

Louis Stone, Reporter

February 23, 2021

4 Min Read


Autonomous vehicles may reduce the number of deaths and injuries caused by human drivers, but they also open up entirely new avenues for cyber attacks, the EU has warned.

A report by the European Union Agency for Cybersecurity (ENISA) and the European Commission's Joint Research Centre (JRC) detailed the risks of putting a machine learning-based system in charge of a large metal object hurtling through populated areas.

The agencies described a number of potential attacks, some of which could be carried out by jamming or confusing self-driving car sensors, as well as those that would require sophisticated supply chain hacks to change the vehicle’s on-board software.

Turning a car into a computer

The report, titled ‘Cybersecurity Challenges in the Uptake of Artificial Intelligence in Autonomous Driving’, looked at both semi-autonomous and fully autonomous vehicles, both of which can pose a threat to passengers and pedestrians.

“Sensors may be blinded or jammed,” the report noted. “In this way, the attacker may manipulate the AI model, feed the algorithm with erroneous data or intentionally provide scarce data and thus diminishing the effectiveness of automated decision-making.”

Equally, the car could be subjected to a DDoS-style attack, since “disrupting the communication channels available to an AV makes it essentially blind to the outside world.”

Stickers, physical alterations, or carefully curated light patterns could confuse an AI-based system, both affecting its behaviour immediately and polluting the larger data pool that the vehicle contributes to.
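To make that concrete, the sketch below shows the digital cousin of the sticker attack: the Fast Gradient Sign Method (FGSM), which nudges each pixel of an input image in the direction that most increases the classifier's error. The toy model, the random "camera frame", the class labels, and the perturbation budget are all stand-ins for illustration; they are not taken from the ENISA/JRC report.

```python
# Minimal FGSM sketch against a toy "traffic sign" classifier (assumed setup).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: 3x32x32 image -> 2 classes (say, "stop" vs. "speed limit").
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in camera frame
true_label = torch.tensor([0])                          # assume class 0 = "stop sign"

# Compute the loss against the correct label and backpropagate to the input.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM step: move every pixel slightly in the direction that increases the loss.
epsilon = 0.1  # perturbation budget; a physical sticker plays an analogous role
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The point is not this particular model but the mechanism: a tiny, targeted change to the input can move a learned decision boundary, whether the change is applied digitally or printed onto a road sign.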

Another approach attackers could take is to hijack and manipulate a vehicle’s communication channels, feeding it false road-infrastructure or GPS data – an attack often referred to as spoofing.
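One common vehicle-side mitigation, sketched below, is a simple plausibility check: reject any position fix that implies a physically impossible jump since the previous one. The coordinates, speed limit, and timings here are illustrative assumptions, not values from the report.

```python
# Minimal sketch of a GPS plausibility check against spoofed position fixes.
import math

MAX_SPEED_MPS = 60.0  # assumed upper bound on how fast the vehicle can move


def plausible(prev_fix, new_fix, dt_seconds):
    """Return True if moving from prev_fix to new_fix within dt_seconds is physically possible."""
    (x1, y1), (x2, y2) = prev_fix, new_fix          # positions in metres, local frame
    distance = math.hypot(x2 - x1, y2 - y1)
    return distance <= MAX_SPEED_MPS * dt_seconds


last_fix = (0.0, 0.0)
genuine_fix = (25.0, 10.0)    # ~27 m in 1 s: plausible at motorway speed
spoofed_fix = (900.0, 400.0)  # ~985 m in 1 s: physically impossible

print(plausible(last_fix, genuine_fix, 1.0))  # True  -> accept the fix
print(plausible(last_fix, spoofed_fix, 1.0))  # False -> flag possible spoofing
```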

In one potential attack scenario, the report envisions a world in which “an adversary discovers a remotely exploitable vulnerability in the vehicle’s head unit (HU). The attacker exploits this vulnerability over Internet to compromise remotely the HU of vulnerable vehicles. Once inside the HU, the attacker performs lateral movements gaining access to the in-vehicle network.”

After working their way deeper into the vehicle to take control, the attacker may consider “replacing a braking command emitted when a stop sign is detected, by an acceleration command.”
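The sketch below illustrates that frame-substitution idea on a pretend in-vehicle message bus. The frame IDs and payloads are invented for illustration; real vehicles use CAN frames with manufacturer-specific identifiers, and an actual attack would sit on the gateway between the planning and actuation ECUs.

```python
# Minimal sketch of a compromised gateway swapping a brake frame for an acceleration frame.
BRAKE_CMD_ID = 0x220  # hypothetical "apply brakes" frame ID
ACCEL_CMD_ID = 0x110  # hypothetical "accelerate" frame ID


def forward_frame(frame):
    """What a compromised gateway sitting between planner and actuators might do."""
    frame_id, payload = frame
    if frame_id == BRAKE_CMD_ID:
        # Silently rewrite the braking command into an acceleration command.
        return (ACCEL_CMD_ID, b"\x30")  # fabricated throttle payload
    return frame


planner_output = (BRAKE_CMD_ID, b"\xff")  # the planner reacts to a detected stop sign
print(forward_frame(planner_output))      # the actuators instead receive an acceleration frame
```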

An even more extreme hack would see an adversary discover a vulnerability and deploy malicious firmware from backend servers. “Malicious OTA (Over-the-air) updates of the AI models could then be executed so that AVs think it is a legitimate one, as it is initiated from a trusted server. The attack might be used to make the AI “blind” for pedestrians, by manipulating for instance the image recognition component in order to misclassify pedestrians.

“This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks. Given that such OTA updates are being pushed at scale to the entire fleet of vehicles of [a] particular model/brand, it is easy to envisage that the scenario involving the entire fleet may have detrimental safety impact.”
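The scenario hinges on the vehicle trusting an update simply because it arrives from the manufacturer's server. A standard countermeasure, sketched below, is for the vehicle to also verify a detached signature over the update against a public key pinned in the vehicle, so that compromising the backend alone is not enough to push a malicious model. The key handling and update format here are assumptions for illustration, using the widely available `cryptography` library.

```python
# Minimal sketch of signature verification before applying an OTA model update.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key stays with the manufacturer's signing infrastructure;
# only the public key is baked into the vehicle.
signing_key = Ed25519PrivateKey.generate()
vehicle_pinned_key = signing_key.public_key()

legitimate_update = b"model-weights-v42"          # placeholder update payload
signature = signing_key.sign(legitimate_update)


def apply_ota_update(update_bytes, signature_bytes):
    """Install the update only if the pinned key verifies its signature."""
    try:
        vehicle_pinned_key.verify(signature_bytes, update_bytes)
    except InvalidSignature:
        return "rejected: signature check failed"
    return "installed"


print(apply_ota_update(legitimate_update, signature))                   # installed
print(apply_ota_update(b"model-weights-with-blind-spot", signature))    # rejected
```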

The agencies warned that autonomous vehicle developers currently lack sufficient security knowledge, and that the matter is not a priority for the industry.

They recommended that businesses take a more proactive approach, conducting regular, systematic risk assessments and ongoing monitoring.

They also called for audit processes and forensic analysis after incidents, AI security policies across the supply chain, and proper governance.

The report noted that while AI developers, vehicle manufacturers, and component suppliers all need to do more, so do governments. Policy-makers, regulatory bodies, national authorities, and standardization bodies are all highlighted as groups that need to be aware of the risks of AI-controlled vehicles running rampant.

“When an insecure autonomous vehicle crosses the border of an EU Member State, so do its vulnerabilities," Juhan Lepassaar, executive director at the EU Agency for Cybersecurity, said.

"Security should not come as an afterthought, but should instead be a prerequisite for the trustworthy and reliable deployment of vehicles on Europe’s roads."

Stephen Quest, director-general at JRC, added: “It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks.

"To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving.”

About the Author(s)

Louis Stone

Reporter

Louis Stone is a freelance reporter covering artificial intelligence, surveillance tech, and international trade issues.
