NHTSA begins preliminary evaluation of Tesla Model S Autopilot fatality
01 July 2016
The National Highway Traffic Safety Administration’s (NHTSA) Office of Defects Investigation (ODI) has begun a preliminary evaluation of a fatal highway crash involving a 2015 Tesla Model S operating with Autopilot activated. ODI is opening the preliminary evaluation (PE16007) to examine the design and performance of any automated driving systems in use at the time of the crash.
In a blog post, Tesla Motors was quick to point out that this is the first known fatality in more than 130 million miles driven with Autopilot activated. Tesla also pointed out that among all vehicles in the US, there is a fatality every 94 million miles; worldwide, there is a fatality approximately every 60 million miles.
According to Tesla, the Model S was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Autopilot apparently did not detect the white side of the trailer against a brightly lit sky; nor did the driver. As a result, no brake was applied.
The Model S passed under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
We do this to ensure that every time the feature is used, it is used as safely as possible. As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert. Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving.
—Tesla Motors
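Functionally, the hands-on monitoring described above is an escalation loop: check for hands on the wheel, warn visually, then audibly, and finally slow the car until hands-on is detected again. The sketch below is only an illustration of that kind of logic; the thresholds, timings, and callback names are assumptions, not Tesla's implementation.

```python
import time

# Illustrative thresholds -- assumptions, not Tesla's real values.
VISUAL_ALERT_AFTER_S = 15    # show a visual warning after 15 s hands-off
AUDIBLE_ALERT_AFTER_S = 30   # add an audible chime after 30 s hands-off
SLOWDOWN_AFTER_S = 45        # begin gradually slowing after 45 s hands-off


def monitor_hands_on(wheel_torque_detected, show_alert, chime, reduce_speed, poll_s=1.0):
    """Escalating driver-monitoring loop of the kind the article describes.

    wheel_torque_detected(): returns True if the driver's hands are sensed on the wheel.
    show_alert(), chime(), reduce_speed(): callbacks for the escalation steps.
    """
    hands_off_seconds = 0.0
    while True:
        if wheel_torque_detected():
            hands_off_seconds = 0.0              # reset escalation once hands are detected
        else:
            hands_off_seconds += poll_s
            if hands_off_seconds >= SLOWDOWN_AFTER_S:
                reduce_speed()                   # gradually slow until hands-on returns
            elif hands_off_seconds >= AUDIBLE_ALERT_AFTER_S:
                chime()
            elif hands_off_seconds >= VISUAL_ALERT_AFTER_S:
                show_alert()
        time.sleep(poll_s)
```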
In October 2014, Tesla started equipping Model S with hardware to allow for the incremental introduction of self-driving technology: a forward radar; a forward-looking camera; 12 long-range ultrasonic sensors positioned to sense 16 feet around the car in every direction at all speeds; and a high-precision digitally-controlled electric assist braking system. Tesla has since relied upon software updates to increase Autopilot functionality.
In May, deep learning expert Andrew Ng, co-founder of Google’s Deep Learning project and currently Chief Scientist of Baidu (which is working on its own autonomous driving technology, earlier post), tweeted criticism of Tesla’s approach to rolling out Autopilot after reports of a different Autopilot accident.
It’s irresponsible to ship driving system that works 1,000 times and lulls false sense of safety, then… BAM! https://t.co/cbmc8onoKu
— Andrew Ng (@AndrewYNg) May 27, 2016
Ng is also an associate professor in the Department of Computer Science and, by courtesy, in the Department of Electrical Engineering at Stanford University.
The Autopilot fatality is evidence that federal regulators need to go slow as they write new guidelines for self-driving cars, said Consumer Watchdog.
The National Highway Traffic Safety Administration was expected to issue new guidelines for self-driving cars in July, and Secretary of Transportation Anthony Foxx and NHTSA Administrator Mark Rosekind have publicly pressed for the rapid deployment of the technology.
Consumer Watchdog said that NHTSA should conclude its investigation into the Tesla crash and publicly release those data and findings before moving forward with its guidance.
We hope this is a wake-up call to federal regulators that we still don’t know enough about the safety of self-driving cars to be rushing them to the road. The Administration should slow its rush to write guidelines until the causes in this crash are clear, and the manufacturers provide public evidence that self-driving cars are safe. If a car can’t tell the difference between a truck and the sky, the technology is in doubt.
—Carmen Balber, executive director with Consumer Watchdog
Self-driving cars in California have shown a similar inability to handle many common road situations, according to Consumer Watchdog. Under California’s self-driving car testing requirements, companies were required to file “disengagement reports” explaining when a test driver had to take control. The reports show that the cars are not always capable of “seeing” pedestrians and cyclists, traffic lights, low-hanging branches, or the proximity of parked cars. The cars also are not capable of reacting to reckless behavior of others on the road quickly enough to avoid the consequences, the reports showed.
Over the 15-month reporting period a human driver was forced to take over a Google self-driving car 341 times, an average of 22.7 times a month. The cars’ technology failed 272 times and ceded control to the human driver, and the driver felt compelled to intervene and take control 69 times.
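As a quick consistency check on the disengagement figures above (the numbers are those cited in the reports; only the arithmetic is added here):

```python
# Google disengagement-report figures as cited above.
months = 15
tech_failures = 272          # car's technology failed and ceded control
driver_interventions = 69    # driver felt compelled to take control
total_takeovers = tech_failures + driver_interventions

print(total_takeovers)                     # 341 takeovers over the reporting period
print(round(total_takeovers / months, 1))  # about 22.7 takeovers per month
```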
Consumer Watchdog has called on NHTSA to hold a public rulemaking on self-driving cars, and to require the cars to have a steering wheel and pedals to allow a human driver to take over when the technology fails.
My issue with very aggressive rollouts of new technology is that they can do so much damage that they hinder its widespread introduction rather than speed it.
Tesla released self-proclaimed beta software, itself remarkable for a safety-critical feature, and carried out testing of it using wholly unqualified individuals, at the risk of the general public, on public highways.
In this instance the release of this capability also highlights the inadequacies of the sensors they have installed, as their explanation is that in those lighting conditions the car could not see properly.
That is what radar and other sensor systems are for, and why other manufacturers are holding off on general release until they have a suite of sensors.
Irresponsibility catches up with you.
Posted by: Davemart | 01 July 2016 at 01:34 AM
The Verge has the story that the driver who was killed while driving on Autopilot was known for drawing attention to himself on YouTube by posting videos of close encounters with Autopilot engaged. He filmed everything and deliberately brought himself into dangerous situations in order to have something interesting to post on YouTube. See the link to The Verge below and one of his deliberately reckless driving events. The good thing is that he died before he could kill anybody else with his lust for narcissism.
As always, Davemart is criticising Tesla for whatever he can because he is angry he can’t afford to drive one himself and is being motivated by pure envy and bile towards those who can.
http://www.theverge.com/2016/6/30/12072634/tesla-autopilot-crash-autonomous-mode-viral-video
Posted by: Account Deleted | 01 July 2016 at 02:58 AM
It is inevitable that there will be a few fatalities along the way, especially as Tesla brought it out so fast.
This is an unusual corner case and we will get these every so often.
What this illustrates is that Tesla shouldn't rely solely on camera-based vision; they will have to add in 3D ranging (lidar) or radar. There are too many situations where camera-based vision will fail or be confused; you simply have to use a set of different sensors, especially 3D.
"Autopilot apparently did not detect the white side of the trailer against a brightly lit sky; nor did the driver. As a result, no brake was applied."
You won't catch this situation with 2D vision or with stereo-based 3D vision, as there are insufficient features on a white-sided trailer. If you had a logo or some pattern on the trailer, you'd have seen it.
The problem with driving is that there are so many corner cases that a human can usually react to, but a machine cannot.
The solution in this case is to use active ranging sensors like lidar or radar.
It will all add to the cost and time of development.
Posted by: mahonj | 01 July 2016 at 03:08 AM
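mahonj's argument is, in effect, a case for sensor fusion: a camera-based classifier that sees nothing should not be able to veto braking when an active ranging sensor reports a closing object in the vehicle's path. The sketch below is purely illustrative of such a cross-check; the data structures and the 0.5 confidence threshold are assumptions, not any manufacturer's algorithm.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CameraDetection:
    confidence: float          # classifier confidence that an obstacle is present (0-1)


@dataclass
class RadarReturn:
    range_m: float             # distance to the reflecting object
    closing_speed_mps: float   # positive when the gap to the object is shrinking


def obstacle_in_path(camera: Optional[CameraDetection],
                     radar: Optional[RadarReturn],
                     braking_distance_m: float) -> bool:
    """Fuse camera and active-ranging evidence rather than trusting either alone.

    A featureless white trailer against a bright sky may produce no camera
    detection at all, which is exactly why the independent radar branch matters.
    """
    camera_says_threat = camera is not None and camera.confidence > 0.5
    radar_says_threat = (radar is not None
                         and radar.closing_speed_mps > 0
                         and radar.range_m < braking_distance_m)
    # Brake if either independent sensing modality reports a credible threat.
    return camera_says_threat or radar_says_threat
```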
mahonj is correct. Tesla and others will learn from the inevitable mishaps. It is unfortunate to have a fatality, but I see fatalities on the news every night due to driver error. In my area, Denver, it is not unusual to have accidents around sunset when the sun is directly in your field of vision. That said, Davemart has a point that trying to roll out technology too quickly can interfere with consumer (and regulatory) acceptance.
Posted by: JMartin | 01 July 2016 at 09:28 AM
"among all vehicles in the US, there is a fatality every 94 million miles"
Not really a useful comparison to cars in the cost/feature class of the Tesla Model S and the Tesla driver demographic. This is actually a poor number.
Posted by: Herman | 01 July 2016 at 10:40 AM
Mahonj, Tesla’s Autopilot does have forward-looking radar in addition to its camera vision. However, the radar looks down; it does not look up, in order to avoid false braking events when driving under a bridge or a sign overhang. My bet is that Tesla’s next revamp of the Autopilot sensor package, which I expect to see in the Model S and X in summer 2017 and also in the Model 3 in late 2017, will have the ability to look up with radar and to do it in a way that can distinguish between a sign overhang, a trailer, and a lot of other things as well, like traffic lights, power and communication cables, etc.
Herman, one event does not make a statistic. You need a minimum of 50 events, or the equivalent of 5 billion miles, to draw even a rough conclusion about death statistics with Autopilot. That will take years, and meanwhile the Autopilot will have improved drastically, so the basis for the statistics will need to be redone. As long as the Autopilot is only seeing 1 death per 100 million miles there is no reason for alarm. The goal is of course to make a car that has less than 1 death per 1 billion miles and that will not allow you to crash it and kill yourself even if you want to. I am not against people killing themselves, but it should not be done in a car in a way that brings other people into danger and creates a mess to be cleaned up afterward.
Tesla has hundreds of airbag deployments on record with and without Autopilot, and that statistic clearly shows that when Autopilot is on, the chance of an accident per mile, measured by airbag deployments, is significantly lower than when Autopilot is not activated. So Tesla already has solid documentation that its Autopilot makes its cars safer to drive.
Posted by: Account Deleted | 02 July 2016 at 12:29 AM
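Henrik's point that one event does not make a statistic can be made precise with an exact Poisson confidence interval: a single fatality in 130 million miles is statistically consistent with anything from roughly one fatality per 23 million miles to one per 5 billion miles. A short sketch of that calculation (the 130-million-mile exposure is Tesla's figure; the interval construction is standard statistics, nothing Tesla-specific):

```python
from scipy.stats import chi2

miles = 130e6   # Autopilot miles at the time of the first fatality (Tesla's figure)
k = 1           # observed fatalities
alpha = 0.05    # for a 95% confidence interval

# Exact Poisson confidence interval for the expected number of events, given k observed.
lower = chi2.ppf(alpha / 2, 2 * k) / 2              # ~0.025 events
upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2    # ~5.57 events

# Expressed as miles per fatality, the 95% interval spans roughly
# 23 million to 5.1 billion miles, far too wide for a meaningful
# comparison against the ~94-million-mile US average.
print(miles / upper, miles / lower)
```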
IMO the way to start introducing Autopilot is to select a speed under which Autopilot can be switched on, i.e. city driving. In that case unnecessary application of the brakes would not be so annoying or dangerous. As the Autopilot becomes more reliable, the speed limit could be increased. Autopilot certification could be performed in a similar manner, based on speed intervals.
Posted by: Darius | 02 July 2016 at 02:56 AM
Henrik said:
'As always, Davemart is criticising Tesla for whatever he can because he is angry he can’t afford to drive one himself and is being motivated by pure envy and bile towards those who can.'
Thank you for making it clear to all the impeccable logic behind your postings.
Any bile would appear to be yours, but that is to be expected from those so immature that they have not grown out of hero worship.
Posted by: Davemart | 02 July 2016 at 02:57 AM
Herman said:
'"among all vehicles in the US, there is a fatality every 94 million miles"
Not really a useful comparison to cars in the cost/feature class of the Tesla Model S and the Tesla driver demographic. This is actually a poor number.'
Indeed.
In their efforts to spin Tesla have opened a can of worms.
It is misleading to generalise from a single incident, but since that is the statistical game Tesla have chosen to play:
'"130 million miles used before a fatality as opposed to every 94 million miles driven with regular drivers."
This statistic is deliberately misleading. The actual statistic for driver-only fatalities, cars 4 years old or newer, "driven by regular drivers," is about 1 fatality in 430 million miles. For big German built sedans the number is even better.
Tesla is doing exactly what they did rationalizing the rate of fires in the Model S, comparing a vehicle/driver population which includes all cars on the road, including those so old they don't have airbags, those with bald tires, and those driven by people with substance abuse problems to a new large sedan with the latest airbag technology, driven by middle aged professionals.
***
@ZackyB: "which means that on Autopilot, Teslas are still statistically safer than regular vehicles."
That's not true. You have to compare car type and driver demographic, and when you do, you find 1 fatality per 130 million miles is not actually very good.
For example, from the IIHS database, the 2008 to 2011 Lexus ES 350 accumulated 528,000 registered vehicle years and a fatality rate of 9 per million RVY. If you assume the average mileage per year was 12,000, then there was approximately 1 fatality per 1,330 million miles driven, or 10 times better than the Model S driven under Autopilot.
Tesla is deliberately muddying the water here by comparing statistics which include older less safe, smaller, poorer maintained vehicles driven by problematic driver demographics to its fleet of nearly new large luxury sedans.'
(Bubslug, Seeking Alpha)
Posted by: Davemart | 02 July 2016 at 03:33 AM
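The Lexus ES 350 figure in the quoted comment follows directly from the IIHS numbers it cites, using the commenter's assumed 12,000 miles per year:

```python
registered_vehicle_years = 528_000   # IIHS exposure, 2008-2011 Lexus ES 350
fatalities_per_million_rvy = 9       # fatality rate per million registered vehicle years
assumed_miles_per_year = 12_000      # the commenter's assumption

fatalities = registered_vehicle_years * fatalities_per_million_rvy / 1_000_000
total_miles = registered_vehicle_years * assumed_miles_per_year

print(total_miles / fatalities / 1e6)   # ~1,333 million miles per fatality,
                                        # versus 130 million on Autopilot
```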
Globally nearly 1.3 million people die in road crashes each year, with an additional 20-50 million injured or disabled.
So if you rant about Tesla's one fatality please discuss it within this larger perspective.
We can't wait for perfection - this type of automation will quickly and dramatically reduce fatalities and injuries, and I applaud Tesla for having the balls to try and get there quickly.
Posted by: Juan Valdez | 02 July 2016 at 09:12 AM
It's being reported that the driver may have been watching a movie on a separate DVD player found in the vehicle. This, however, is only a distraction from the bigger issue here.
Even if we agree that total accidents will decline with self-driving cars, some number of novel accident types will arise. Liability for many of the scenarios is not settled, and in some cases not at all obvious.
This is either going to be settled by
a) legislation or
b) case law.
No approach is apt to be fully satisfactory, but b) may be expected to be completely unpredictable, glacially slower, inconsistent from state to state until each case reaches SCOTUS, and to apportion the insurance premium burden randomly.
Posted by: Bob Niland | 02 July 2016 at 12:54 PM
The core issue, one that I've warned about repeatedly here and on other forums, is driver complacency. That's hard to test with professional drivers. They're less likely to get complacent, or to do stupid things like play games while driving.
If Tesla Autopilot drivers get lulled into a state of complacency, they will drive until they encounter a situation they cannot recover from. Then an accident will occur. That might take a lot of trips, but it becomes somewhat inevitable, like eating sugar until you get diabetes.
That endangers not just the Tesla driver, but everyone who shares the roadway.
Complacency is the mild version of what's happening out there. Take a look at this couple playing games while "driving" a Tesla on autopilot.
https://www.facebook.com/NowThisFuture/videos/1157723037602208/
Posted by: electric-car-insider.com | 02 July 2016 at 07:44 PM
@eci, you have to have exactly one party driving the car: either the car itself, or the driver. Tesla's current situation is a fudge: they are pretending that the driver is in control, but realistically, he isn't - the time taken to assess a situation and take over is probably greater than the time to impact.
Anyway, you have to get to the situation where the car is fully in control.
@Henrik, having a sensor is of limited use if it is not being used at certain angles, etc. However, trying to interpret every scene is a hard problem, especially when you don't want the system slamming the brakes on every time a seagull flies by, as this will cause the car to get rear-ended. Usually a rear-ending is the fault of the guy behind, but if an AV brakes suddenly for no obvious reason, the fault is arguably the AV's.
The algorithm for threat detection probably has the implicit assumption that "anything that will harm me is on the road", which unfortunately is not true for the middle of a semi truck.
Anyway, I'm sure they will be able to fix this case with a s/w update.
The problem for all automatic cars is that every time they kill someone, people will point the finger, while the number of lives they save will just be a number, which lacks the impact of a dead body.
Posted by: mahonj | 03 July 2016 at 01:10 AM
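mahonj's point about the implicit assumption that "anything that will harm me is on the road", together with Henrik's earlier note that the radar deliberately ignores overhead returns, comes down to how elevated objects are filtered. The sketch below only illustrates that trade-off; the car height, margin, and function are assumptions, not any manufacturer's logic.

```python
# Illustrative sketch of the overhead-object filtering problem discussed above.
CAR_HEIGHT_M = 1.45       # assumed height of a large sedan
SAFETY_MARGIN_M = 0.30    # assumed clearance margin


def is_braking_threat(object_bottom_height_m: float) -> bool:
    """Treat a return as a braking threat only if the car cannot pass under it.

    A naive filter that discards every elevated return (bridges, overhead signs)
    also discards a trailer deck spanning the road at roughly 1.2 m, which is the
    corner case in this crash. Estimating the object's bottom edge and comparing
    it with the car's own height keeps the bridge filter without losing the trailer.
    """
    return object_bottom_height_m < CAR_HEIGHT_M + SAFETY_MARGIN_M


print(is_braking_threat(5.0))   # overhead sign at ~5 m: not a threat
print(is_braking_threat(1.2))   # trailer deck at ~1.2 m: a threat
```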
It seems obvious that better sensors are required to improve safety. It will certainly be the case on sunny days (especially in early mornings and evenings) with snow-covered ground and blowing snow.
Posted by: HarveyD | 04 July 2016 at 03:50 PM
@mahonj Agree completely. I believe it will take decades to solve enough of the edge cases to make full autonomy viable.
As Harvey points out, the variability of something as common as weather conditions is highly problematic. Tesla AP needs to "see" lane markings and fails when it can't.
Posted by: electric-car-insider.com | 05 July 2016 at 08:22 AM
E.c.i. it will only take 2 or 3 more years to make a fully driverless vehicle that can go everywhere a human can and do it significantly more safely, say with a 20% reduction in accident probability. However, it could take until 2030 before the accident probability of fully driverless vehicles is reduced to just 10% of what it is when human drivers are in control.
Posted by: Account Deleted | 05 July 2016 at 11:42 PM
Henrik, you are optimistic on the timeframe by at least an order of magnitude.
The validation testing on such a system alone - the validation testing that Tesla should have done! - will take a few years, and that's assuming you don't run into a show-stopper in the process of doing so.
These systems do not work properly in adverse weather conditions or in traffic circumstances that are out of the norm. Tough one: Police officer directing traffic. Tougher one: Civilian directing traffic because the police haven't arrived yet. Tougher yet: Kids up to no good and waving their arms.
The crash in Florida happened at an intersection with visibility conditions that could hardly have been better. And it still happened because a circumstance developed that the system could not deal with - one that a human could have easily dealt with, IF that human had been paying attention instead of watching a movie ...
Posted by: Brian Petersen | 06 July 2016 at 05:19 AM
I read somewhere that the visibility conditions were far from ideal, with a low-hanging sun blinding both the driver and the forward-looking camera, and a white-painted truck. The Autopilot could not see it, and the driver might not have seen it either, even if he had not been watching a video, something we do not yet know for sure he was doing when the accident happened. Tesla is not at fault, and the driver might possibly not be at fault either, although I guess he was, either because he was watching a video or because he deliberately put the car in this situation in order to provoke something worth posting on YouTube. 1.3 million traffic deaths happen every year; this one is hardly interesting in that perspective. Give me the statistics of 50 to 100 deaths with Tesla's Autopilot enabled and we can start to make some scientifically based conclusions. So far the airbag deployment statistics support the view that driving with Autopilot is much safer than driving without it.
Posted by: Account Deleted | 06 July 2016 at 05:47 AM
... and a sun low in the sky is not a foreseeable situation - in fact something that everyone has to deal with every day?
... and their radar / ultrasound systems which ought to be immune to lighting conditions didn't work either?
Tesla's autopilot systems have not had sufficient validation testing. Period. Other manufacturers have the same sensing technology available to them but they take greater measures to encourage the driver to remain in control of the vehicle. There's a reason for that. It's not ready for prime time.
I know Tesla claims this is the first fatality in however many million miles of Autopilot operation. Sure. Autopilot operation is in mostly low-risk scenarios - late model vehicles operated by wealthy-demographic owners (not teenagers) on motorways in good weather, etc. Doesn't work in adverse weather. Doesn't work at intersections (following stop signs or traffic signals). You know, situations where collisions actually occur ... 100 million miles of steady-speed motorway driving in good weather is not comparable to 100 million miles of a mixture of that plus urban plus construction zones plus rain plus snow and all the other stuff that humans have to deal with.
Posted by: Brian Petersen | 06 July 2016 at 08:48 AM
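Brian Petersen's exposure argument can be illustrated with invented numbers: if Autopilot miles are concentrated in the lowest-risk driving, the fair baseline is the fatality rate for that mileage mix, not the all-miles average. In the sketch below, everything except the roughly 1-per-94-million-mile US figure is hypothetical.

```python
# Hypothetical fatality rates per 100 million miles, invented for illustration only.
rates = {
    "motorway_good_weather": 0.4,   # assumed low-risk baseline
    "other": 1.06,                  # roughly the all-miles US figure (1 per 94M miles)
}
# Assumed share of Autopilot mileage in each category.
autopilot_mileage_share = {"motorway_good_weather": 0.9, "other": 0.1}

# Expected fatality rate for a human-driven fleet with the same mileage mix.
baseline = sum(autopilot_mileage_share[k] * rates[k] for k in rates)
print(round(baseline, 2))   # ~0.47 per 100M miles: the fair comparison point may
                            # sit well below the headline 1-per-94M figure.
```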
@harvey, you are right about the snow, wet lines, etc.: the system needs to be able to handle these situations, and it won't be able to do it with 2D or 3D vision alone; it will need very accurate maps (2-3 cm accuracy), but many people are working on this (especially Google).
@Henrik, I think the 2-3 years is a bit optimistic for all roads, but there will be some (many) roads that AV systems will be able to use. (It might take a while to be able to drive in Naples or Delhi.)
An interesting question is when do you allow AV cars on the roads and what do you do when they inevitably crash?
IMO the only way to really test them is to put them out there and see what happens. I think Tesla are both reckless and doing quite well and everyone else is sitting back and watching what is happening.
Tesla have the advantage of a large fanbase who are happy to "drive" beta level AVs and gather loads of test cases for Tesla. If you don't do this, you will need decades of testing. I think this will give Tesla a huge advantage.
What the authorities could do to allow Tesla to continue testing would be to force them to give their test data set (in some form) to the other manufacturers so they could all benefit from it. The same could be made to apply to all manufacturers.
That would accelerate the move to AVs and give more consistent responses to threats.
Thus, you would only have to see each corner case once.
Posted by: mahonj | 07 July 2016 at 12:39 AM