Researchers found they could stop a Tesla by flashing a few frames of a stop sign for less than half a second on an internet-connected billboard.

Safety concerns over automated driver-assistance systems like Tesla’s usually focus on what the car can’t see, like the white side of a truck that one Tesla confused with a bright sky in 2016, leading to the death of a driver. But one group of researchers has been focused on what autonomous driving systems might see that a human driver doesn’t—including “phantom” objects and signs that aren’t really there, which could wreak havoc on the road.

Researchers at Israel’s Ben Gurion University of the Negev have spent the last two years experimenting with those “phantom” images to trick semi-autonomous driving systems. They previously revealed that they could use split-second light projections on roads to trick Tesla’s driver-assistance system into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected into a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

“The attacker just shines an image of something on the road or injects a few frames into a digital billboard, and the car will apply the brakes or possibly swerve, and that’s dangerous,” says Yisroel Mirsky, a researcher for Ben Gurion University and Georgia Tech who worked on the research, which will be presented next month at the ACM Computer and Communications Security conference. “The driver won’t even notice at all. So somebody’s car will just react, and they won’t understand why.”

In their first round of research, published earlier this year, the team projected images of human figures onto a road, as well as road signs onto trees and other surfaces. They found that at night, when the projections were visible, they could fool both a Tesla Model X running the HW2.5 Autopilot driver-assistance system—the most recent version available at the time, now the second-most-recent—and a Mobileye 630 device. They managed to make a Tesla stop for a phantom pedestrian that appeared for a fraction of a second, and tricked the Mobileye device into communicating the incorrect speed limit to the driver with a projected road sign.

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot, known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.
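
The scenario they describe boils down to splicing a sign into a handful of frames of otherwise ordinary ad footage. As a rough illustration only (not the researchers’ actual tooling; the file names, frame count, and placement below are all assumptions), a few lines of OpenCV are enough to overlay a stop-sign image on roughly half a second of a billboard clip:

```python
# Illustrative sketch of frame injection into a billboard video.
# NOT the researchers' code; file names, timing, and placement are assumed.
import cv2

SIGN_FRAMES = 12   # ~half a second at 24 fps, matching the "few frames" described
INSERT_AT = 240    # arbitrary point in the clip

cap = cv2.VideoCapture("billboard_ad.mp4")          # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("billboard_ad_phantom.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

sign = cv2.imread("stop_sign.png")                  # hypothetical phantom sign image
sign = cv2.resize(sign, (w // 4, h // 4))           # occupy one corner of the screen

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if INSERT_AT <= idx < INSERT_AT + SIGN_FRAMES:
        # Overlay the sign in the top-right corner for a fraction of a second
        frame[0:sign.shape[0], w - sign.shape[1]:w] = sign
    out.write(frame)
    idx += 1

cap.release()
out.release()
```

The point the researchers make is that a burst this brief is invisible, or nearly so, to a person watching the billboard, but long enough for a car’s camera pipeline to register the sign.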

The Ben Gurion researchers are far from the first to demonstrate methods of spoofing inputs to a Tesla’s sensors. As early as 2016, one team of Chinese researchers demonstrated they could spoof and even hide objects from Tesla’s sensors using radio, sonic, and light-emitting equipment. More recently, another Chinese team found they could exploit Tesla’s lane-follow technology to trick a Tesla into changing lanes just by planting cheap stickers on a road.

But the Ben Gurion researchers point out that, unlike those earlier methods, their projections and hacked billboard tricks don’t leave behind physical evidence. Breaking into a billboard, in particular, can be done remotely, as plenty of hackers have previously demonstrated. The team speculates that the phantom attacks could be carried out as an extortion technique, as an act of terrorism, or for pure mischief. “Previous methods leave forensic evidence and require complicated preparation,” says Ben Gurion researcher Ben Nassi. “Phantom attacks can be done purely remotely, and they do not require any special expertise.”

Neither Mobileye nor Tesla responded to WIRED’s request for comment. But in an email to the researchers themselves last week, Tesla made a familiar argument that its Autopilot feature isn’t meant to be a fully autonomous driving system. “Autopilot is a driver assistance feature that is intended for use only with a fully attentive driver who has their hands on the wheel and is prepared to take over at any time,” reads Tesla’s response. The Ben Gurion researchers counter that Autopilot is used very differently in practice. “As we know, people use this feature as an autopilot and do not keep 100 percent attention on the road while using it,” writes Mirsky in an email. “Therefore, we must try to mitigate this threat to keep people safe, regardless of [Tesla’s] warnings.”

Tesla does have a point, though not one that offers much consolation to its own drivers. Tesla’s Autopilot system depends largely on cameras and, to a lesser extent, radar, while more truly autonomous vehicles like those developed by Waymo, Uber, or GM-owned autonomous vehicle startup Cruise also integrate laser-based lidar, points out Charlie Miller, the lead autonomous vehicle security architect at Cruise. “Lidar would not have been susceptible to this type of attack,” says Miller. “You can change an image on a billboard and lidar doesn’t care, it’s measuring distance and velocity information. So these attacks wouldn’t have worked on most of the truly autonomous cars out there.”

The Ben Gurion researchers didn’t test their attacks against those other, more sensor-rich setups. But they did demonstrate ways to detect the phantoms they created even on a camera-based platform. They developed a system they call “Ghostbusters” that’s designed to take into account a collection of factors like depth, light, and the context around a perceived traffic sign, then weigh all those factors before deciding whether a road sign image is real. “It’s like a committee of experts getting together and deciding based on very different perspectives what this image is, whether it’s real or fake, and then making a collective decision,” says Mirsky. The result, the researchers say, could far more reliably defeat their phantom attacks without perceptibly slowing down a camera-based autonomous driving system’s reactions.
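
Mirsky’s “committee of experts” description maps onto a simple weighted-vote structure. The sketch below is only a toy illustration of that idea, with assumed names and constant placeholder scores; the actual Ghostbusters system weighs these factors with far more sophistication than fixed constants:

```python
# Toy sketch of a committee-of-experts check on a detected road sign.
# Expert functions and weights are placeholders, not the real Ghostbusters models.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class Expert:
    name: str
    score: Callable[[np.ndarray], float]  # returns P(sign is real), in [0, 1]
    weight: float


def committee_decision(sign_crop: np.ndarray, experts: List[Expert],
                       threshold: float = 0.5) -> bool:
    """Weighted vote across experts; True means 'treat the sign as real'."""
    total = sum(e.weight for e in experts)
    combined = sum(e.weight * e.score(sign_crop) for e in experts) / total
    return combined >= threshold


# Placeholder experts: each would really be a trained model judging one aspect.
experts = [
    Expert("depth",   lambda crop: 0.1, weight=1.0),  # sign sits on a flat billboard
    Expert("light",   lambda crop: 0.4, weight=1.0),  # emitted rather than reflected light
    Expert("context", lambda crop: 0.2, weight=1.0),  # a sign floating inside an ad
]

phantom_crop = np.zeros((64, 64, 3), dtype=np.uint8)
print(committee_decision(phantom_crop, experts))  # False: the phantom is rejected
```

The design choice is that no single cue vetoes or confirms a sign on its own; only the combined judgment does.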

Ben Gurion’s Nassi concedes that the Ghostbusters system isn’t perfect, and he argues that their phantom research shows the inherent difficulty in making autonomous driving decisions even with multiple sensors like a Tesla’s combined radar and camera. Tesla, he says, has taken a “better safe than sorry” approach that trusts the camera alone if it shows an obstacle or road sign ahead, leaving it vulnerable to their phantom attacks. But an alternative might disregard hazards if one or more of a vehicle’s sensors misses them. “If you implement a system that ignores phantoms if they’re not validated by other sensors, you will probably have some accidents,” says Nassi. “Mitigating phantoms comes with a price.”
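
To make that trade-off concrete, here is a toy sketch (not Tesla’s actual logic; the sensor flags and policy switch are assumptions) contrasting a “better safe than sorry” policy with one that demands cross-sensor validation:

```python
# Toy contrast of two braking policies; purely illustrative, not any vendor's logic.
def should_brake(camera_sees_obstacle: bool,
                 radar_sees_obstacle: bool,
                 require_cross_validation: bool) -> bool:
    if require_cross_validation:
        # Phantom-resistant, but ignores hazards that only the camera catches
        return camera_sees_obstacle and radar_sees_obstacle
    # "Better safe than sorry": a single sensor is enough to trigger braking
    return camera_sees_obstacle or radar_sees_obstacle


# A projected or billboard phantom: visible to the camera, invisible to radar
print(should_brake(True, False, require_cross_validation=False))  # True: car brakes for the phantom
print(should_brake(True, False, require_cross_validation=True))   # False: phantom ignored, but so is
                                                                   # any real hazard that radar misses
```

That second print line is Nassi’s point in miniature: the same rule that filters out phantoms also discards real obstacles that only one sensor detects.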

Cruise’s Charlie Miller, who previously worked on autonomous vehicle security at Uber and Chinese self-driving car firm Didi Chuxing, counters that truly autonomous, lidar-enabled vehicles have in fact managed to solve that problem. “Attacks against sensor systems are interesting, but this isn’t a serious attack against the systems I’m familiar with,” such as Uber and Cruise vehicles, Miller says. But he still sees value in Ben Gurion’s work. “It’s something we need to think about and work on and plan for. These cars rely on their sensor inputs, and we need to make sure they’re trusted.”