The National Highway Traffic Safety Administration’s (NHTSA) Office of Defects Investigation (ODI) has begun a preliminary evaluation of a fatal highway crash involving a 2015 Tesla Model S operating with Autopilot activated. ODI is opening the preliminary evaluation (PE16007) to examine the design and performance of any automated driving systems in use at the time of the crash.
In a blog post, Tesla Motors was quick to point out that this is the first known fatality in more than 130 million miles driven with Autopilot activated. Tesla also noted that among all vehicles in the US there is a fatality every 94 million miles; worldwide, there is a fatality approximately every 60 million miles.
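To put those figures on a common footing, the short sketch below converts each to fatalities per 100 million miles. It is illustrative only: it simply normalizes the mileage figures Tesla cited, not independent data.

```python
# Illustrative only: normalize the mileage figures quoted by Tesla
# to fatalities per 100 million miles so they can be compared directly.
miles_per_fatality = {
    "Autopilot (Tesla's figure)": 130e6,  # first known fatality in >130M Autopilot miles
    "US average, all vehicles": 94e6,     # one fatality every 94M miles
    "Worldwide average": 60e6,            # one fatality roughly every 60M miles
}

for label, miles in miles_per_fatality.items():
    rate = 100e6 / miles  # fatalities per 100 million miles
    print(f"{label}: {rate:.2f} per 100 million miles")
```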
According to Tesla, the Model S was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Autopilot apparently did not detect the white side of the trailer against a brightly lit sky; nor did the driver. As a result, the brakes were not applied.
The Model S passed under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgment that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
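As a rough illustration of the escalation described above (this is not Tesla’s implementation; the time thresholds, sensor readings, and function names below are assumptions), a hands-on monitor can be modeled as a loop that issues a visual alert, then an audible alert, and finally begins slowing the car the longer hands-on goes undetected, resetting as soon as hands are back on the wheel.

```python
# Minimal sketch of hands-on-wheel monitoring as described above.
# Not Tesla's code: thresholds, escalation steps, and names are hypothetical.

def monitor_hands_on(readings, check_interval_s=1.0,
                     visual_after_s=5.0, audible_after_s=10.0, slow_after_s=15.0):
    """readings: sequence of booleans, one hands-on sensor sample per check interval."""
    hands_off_s = 0.0
    for hands_on in readings:
        if hands_on:
            hands_off_s = 0.0                      # reset once hands are detected again
            yield "ok"
        else:
            hands_off_s += check_interval_s
            if hands_off_s >= slow_after_s:
                yield "gradually slowing the car until hands-on is detected"
            elif hands_off_s >= audible_after_s:
                yield "audible alert: keep your hands on the wheel"
            elif hands_off_s >= visual_after_s:
                yield "visual alert: keep your hands on the wheel"
            else:
                yield "monitoring"

# Example: 20 consecutive hands-off samples, then hands return to the wheel.
for state in monitor_hands_on([False] * 20 + [True]):
    print(state)
```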
We do this to ensure that every time the feature is used, it is used as safely as possible. As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert. Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving.—Tesla Motors
In October 2014, Tesla started equipping the Model S with hardware to allow for the incremental introduction of self-driving technology: a forward radar; a forward-looking camera; 12 long-range ultrasonic sensors positioned to sense 16 feet around the car in every direction at all speeds; and a high-precision, digitally controlled electric assist braking system. Tesla has since relied upon software updates to increase Autopilot functionality.
In May, deep learning expert Andrew Ng, co-founder of Google’s Deep Learning project and currently Chief Scientist of Baidu (which is working on its own autonomous driving technology, earlier post), tweeted criticism of Tesla’s approach to rolling out Autopilot after reports of a different Autopilot accident.
It’s irresponsible to ship driving system that works 1,000 times and lulls false sense of safety, then… BAM! https://t.co/cbmc8onoKu — Andrew Ng (@AndrewYNg) May 27, 2016
Ng is also an associate professor in the Department of Computer Science and, by courtesy, the Department of Electrical Engineering at Stanford University.
The Autopilot fatality is evidence that federal regulators need to go slow as they write new guidelines for self-driving cars, said Consumer Watchdog.
NHTSA is expected to issue new guidelines for self-driving cars in July; Secretary of Transportation Anthony Foxx and NHTSA Administrator Mark Rosekind have publicly pressed for the rapid deployment of the technology.
Consumer Watchdog said that NHTSA should conclude its investigation into the Tesla crash and publicly release its data and findings before moving forward with that guidance.
We hope this is a wake-up call to federal regulators that we still don’t know enough about the safety of self-driving cars to be rushing them to the road. The Administration should slow its rush to write guidelines until the causes in this crash are clear, and the manufacturers provide public evidence that self-driving cars are safe. If a car can’t tell the difference between a truck and the sky, the technology is in doubt.—Carmen Balber, executive director with Consumer Watchdog
Self-driving cars in California have shown a similar inability to handle many common road situations, according to Consumer Watchdog. Under California’s self-driving car testing requirements, companies were required to file “disengagement reports” explaining when a test driver had to take control. The reports show that the cars are not always capable of “seeing” pedestrians and cyclists, traffic lights, low-hanging branches, or the proximity of parked cars. The cars also are not capable of reacting to reckless behavior of others on the road quickly enough to avoid the consequences, the reports showed.
Over the 15-month reporting period, a human driver was forced to take over a Google self-driving car 341 times, an average of 22.7 times a month. The cars’ technology failed and ceded control to the human driver 272 times, and the driver felt compelled to intervene and take control 69 times.
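The breakdown reconciles with the totals cited: the quick check below is just arithmetic on the figures reported above.

```python
# Quick check of the disengagement figures cited above.
technology_failures = 272    # technology failed and ceded control to the driver
driver_interventions = 69    # driver felt compelled to take control
months = 15                  # length of the reporting period

total = technology_failures + driver_interventions    # 341 disengagements
per_month = total / months                            # ~22.7 per month
print(total, round(per_month, 1))
```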
Consumer Watchdog has called on NHTSA to hold a public rulemaking on self-driving cars, and to require the cars to have a steering wheel and pedals to allow a human driver to take over when the technology fails.