While automakers focus on defending the systems in their cars against hackers, there may be other ways for malicious actors to mess with self-driving cars. Security researchers at the University of Washington have shown that they can get computer vision systems to misidentify road signs using nothing more than stickers made on a home printer.
UW computer-security researcher Yoshi Kohno described an attack algorithm that uses printed images stuck on road signs. These images confuse the camera-based vision systems on which most self-driving vehicles rely. In one example, described in a paper posted to the preprint server arXiv last week, small stickers attached to a standard stop sign caused a vision system to misidentify it as a Speed Limit 45 sign.
The vision systems in autonomous cars typically have two parts: an object detector and a classifier. The former spots pedestrians, lights, signs, and other vehicles; the latter decides what each object is, including what a sign says. The attacks Kohno described assume that hackers can gain access to this classifier and then, using its algorithm and a photo of the target road sign, generate a customized image.
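To make that division of labor concrete, here is a minimal sketch, in Python, of how such a two-stage pipeline fits together. The detect_objects and classify_sign functions are hypothetical stubs standing in for trained neural networks, not any automaker's actual interfaces.

```python
# Minimal sketch of a two-stage perception pipeline. The detector and classifier
# here are hypothetical stubs, not any vendor's real models or interfaces.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # x, y, width, height in pixels
    label: str                       # e.g. "stop" or "speed_limit_45"
    score: float                     # classifier confidence


def detect_objects(frame: np.ndarray) -> List[Tuple[int, int, int, int]]:
    """Stage 1: propose candidate regions (stub; a real detector is a neural net)."""
    return [(100, 80, 64, 64)]       # placeholder bounding box


def classify_sign(crop: np.ndarray) -> Tuple[str, float]:
    """Stage 2: assign a sign class to a cropped region (stub)."""
    return "stop", 0.98              # placeholder prediction


def perceive(frame: np.ndarray) -> List[Detection]:
    detections = []
    for (x, y, w, h) in detect_objects(frame):
        crop = frame[y:y + h, x:x + w]
        label, score = classify_sign(crop)
        detections.append(Detection((x, y, w, h), label, score))
    return detections


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy camera frame
    print(perceive(frame))
```

It is the second stage, the classifier, that the researchers target.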
The attack relies on a vulnerability of the deep neural networks that are trained to recognize signs, stoplights, vehicles, and pedestrians using images from cameras mounted on self-driving vehicles. These systems can be sensitive to malicious perturbations—small, precisely crafted changes to their inputs—that can cause them to misbehave in unexpected and potentially dangerous ways.
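For a sense of how such perturbations are generated in general, here is a minimal sketch of the classic gradient-sign approach on a toy model. It illustrates the broader vulnerability, not the specific method in the UW paper; the model, the label, and the perturbation budget are illustrative assumptions.

```python
# Generic illustration of an adversarial perturbation (fast gradient sign method)
# on a toy classifier. Not the paper's method; model, label, and epsilon are
# illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained sign classifier with 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)    # stand-in for a cropped photo of a sign
true_label = torch.tensor([3])      # hypothetical index of the correct class

# Compute the gradient of the loss with respect to the input image.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03                      # small budget, intended to stay hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("prediction on original: ", model(image).argmax(dim=1).item())
    print("prediction on perturbed:", model(adversarial).argmax(dim=1).item())
```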
Researchers have long known that tinkering with what a computer sees can lead to incorrect results. But previous attacks involved changes that were either too extreme—and thus obvious to human drivers—or too subtle, only working from a particular angle or at a certain distance.
[Image: In this example, researchers printed out a true-size image similar to the Right Turn sign and overlaid it on top of the existing sign. Subtle differences cause this to be read as a Speed Limit 45 sign.]
The attack images generated by Kohno and colleagues at the University of Michigan, Stony Brook University, and the University of California, Berkeley, are designed to be printed on a normal color printer and stuck to existing road signs. One attack prints a full-size road sign to be overlaid on an existing sign. In this example, the team was able to create a stop sign that looks merely splotchy or faded to human eyes but that a computer vision system consistently classified as a Speed Limit 45 sign.
A second exploit used small, rectangular black-and-white stickers that, when attached to another stop sign, also caused the computer to see it as a Speed Limit 45 sign. The attacks were successful at a variety of distances, from close up to 40 feet away, and at a range of angles.
Using an attack disguised as graffiti, researchers were able to get computer vision systems to misclassify stop signs at a 73.3 percent rate, causing them to be interpreted as Speed Limit 45 signs.
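As a rough sketch of the sticker idea, the following toy example optimizes a perturbation that is confined to small rectangular regions of an image and pushes a classifier toward a chosen target class. It is a simplified illustration of the concept, not the authors' published algorithm; the model, the sticker masks, and the class indices are stand-ins.

```python
# Simplified sketch of the sticker idea: optimize a perturbation that is applied
# only inside small rectangular "sticker" regions and that pushes a classifier
# toward a chosen target class. A conceptual illustration, not the authors'
# published algorithm; the model, masks, and class indices are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

sign = torch.rand(1, 3, 32, 32)     # stand-in for a photo of a stop sign
target = torch.tensor([7])          # hypothetical index of "Speed Limit 45"

# Binary mask: the perturbation is allowed only where stickers would be placed.
mask = torch.zeros_like(sign)
mask[:, :, 4:10, 6:14] = 1.0        # sticker 1
mask[:, :, 20:26, 18:26] = 1.0      # sticker 2

delta = torch.zeros_like(sign, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    perturbed = (sign + delta * mask).clamp(0, 1)
    # Minimizing this loss drives the prediction toward the target class.
    loss = nn.functional.cross_entropy(model(perturbed), target)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    final = (sign + delta * mask).clamp(0, 1)
    print("prediction on stickered sign:", model(final).argmax(dim=1).item())
```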
“We [think] that given the similar appearance of warning signs, small perturbations are sufficient to confuse the classifier,” wrote Kohno and his colleagues. “In future work, we plan to explore this hypothesis with targeted classification attacks on other warning signs.”
The dangers of such attacks are clear. Many experimental self-driving cars and some production vehicles, including Tesla’s entire range of electric cars, can already automatically recognize road signs. If a future self-driving vehicle could be tricked into responding incorrectly to a sign, it could be made to blow through a stop sign or slam on its brakes in the fast lane.
“Attacks like this are definitely a cause for concern in the self-driving-vehicle community,” said Tarek El-Gaaly, senior research scientist at Voyage, an autonomous-vehicle startup. “Their impact on autonomous driving systems has yet to be ascertained, but over time and with advancements in technology, they could become easier to replicate and adapt for malicious use.”
Even if classifiers differ significantly among manufacturers, hackers might still be able to reverse-engineer them, Kohno thinks. “By probing the system, attackers can usually figure out a similar surrogate model based on feedback, even without access to the actual model itself,” he wrote. There is also a trend for carmakers to use industry-standard systems from providers like Mobileye and even the first signs of open-source self-driving-car technology from Comma.ai and Baidu.
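The surrogate approach Kohno describes can be sketched in a few lines: probe the deployed system for its outputs, fit a local copy to imitate them, and craft attacks against the copy. In the sketch below, query_black_box is a hypothetical stand-in for whatever feedback an attacker can observe, and the surrogate is a toy network rather than a real perception stack.

```python
# Sketch of the surrogate idea: probe the deployed (black-box) classifier for its
# outputs, fit a local copy to imitate them, then craft attacks against the copy.
# `query_black_box` is a hypothetical stand-in for whatever feedback an attacker
# can observe; the surrogate is a toy network, not a real perception stack.
import torch
import torch.nn as nn

torch.manual_seed(0)


def query_black_box(images: torch.Tensor) -> torch.Tensor:
    """Hypothetical oracle: returns the deployed system's predicted class per image."""
    return torch.randint(0, 10, (images.shape[0],))   # placeholder responses


# 1. Collect (input, observed label) pairs by probing the system.
probes = torch.rand(512, 3, 32, 32)
observed = query_black_box(probes)

# 2. Train a surrogate to imitate the observed behavior.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(50):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(surrogate(probes), observed)
    loss.backward()
    optimizer.step()

# 3. Perturbations crafted against `surrogate` can then be tried on the real
#    system; in practice such attacks often transfer between similar models.
```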
Ultimately, said El-Gaaly, carmakers will have to use a combination of defenses to foil hackers. “Many of these attacks can be overcome using contextual information from maps and the perceived environment,” he said. “For example, a ‘65 mph’ sign on an urban road or a stop sign on a highway would not make sense. In addition, many self-driving vehicles today are equipped with multiple sensors, so failsafes can be built in using multiple cameras and lidar sensors.”
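A contextual check of the kind El-Gaaly describes could be as simple as comparing the classifier's output with what the map says is plausible at the vehicle's location, as in this sketch. The road types and the plausibility table are illustrative assumptions, not any production system's rules.

```python
# Sketch of a contextual sanity check: compare the classifier's output against
# what the map says is plausible at the vehicle's location. The road types and
# the plausibility table below are illustrative assumptions, not production rules.
PLAUSIBLE_SIGNS = {
    "highway": {"speed_limit_55", "speed_limit_65", "exit"},
    "urban":   {"stop", "yield", "speed_limit_25", "speed_limit_45"},
}


def sign_is_plausible(classified_sign: str, road_type_from_map: str) -> bool:
    """Reject classifications that make no sense for the mapped road type."""
    return classified_sign in PLAUSIBLE_SIGNS.get(road_type_from_map, set())


# A "65 mph" sign reported on an urban street, or a stop sign on a highway,
# would be flagged for cross-checking against other sensors instead of acted on.
print(sign_is_plausible("speed_limit_65", "urban"))   # False, so flag it
print(sign_is_plausible("stop", "urban"))             # True
```

Implausible classifications would then be deferred to other sensors, such as additional cameras and lidar, rather than acted on directly.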