UPDATED. US carmaker Tesla has been slammed by the US road safety board after the company confirmed details of a fatal crash involving one of its vehicles last week.
Walter (Wei) Huang, the driver of a Tesla Model X SUV, was killed on March 23 when his car hit a concrete barrier on Highway 101, which connects San Francisco with Silicon Valley. He was reportedly on his way to work at Apple.
Huang’s brother Will told ABC7 News that Walter had complained “seven to 10 times that the car would swivel toward that same exact barrier during autopilot. Walter took it into the dealership addressing the issue, but they couldn’t duplicate it there.”
Tesla has admitted that the car was under software control, using the company’s Autopilot technology, when it hit the barrier. Huang’s hands were not on the wheel – as they should have been, according to Tesla’s guidance – when the accident occurred.
In a blog post on its website, Tesla said: “In the moments before the collision, which occurred at 9.27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum.
“The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 metres of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
“The reason this crash was so severe is because the crash attenuator, a highway safety barrier which is designed to reduce the impact into a concrete lane divider, had been crushed in a prior accident without being replaced. We have never seen this level of damage to a Model X in any other crash.”
However, the US National Transportation Safety Board (NTSB) has slammed Tesla for releasing this information without alerting the agency beforehand, as it was required to do in a signed agreement.
The NTSB, which is still investigating the accident, said, “We take each unauthorised release seriously. However, this will not hinder our investigation.”
Increased safety
Tesla has been swift to defend the safety record of its technologies after the incident, which saw the value of its shares plunge in a sell-off. In the same blog post, the company said:
“Over a year ago, our first iteration of Autopilot was found by the US government to reduce crash rates by as much as 40 percent. Internal data confirms that recent updates to Autopilot have improved system reliability.
“In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.
“Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians, and cyclists.”
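For context, the “3.7 times” figure appears to follow directly from the two rates Tesla cites: one fatality per 320 million miles in Autopilot-equipped vehicles against one per 86 million miles across all US vehicles, and 320 ÷ 86 ≈ 3.7.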
The fatality occurred just one week after a pedestrian was killed by an autonomous Uber test vehicle in Tempe, Arizona, despite the presence of a safety driver on board.
However, it is not the first death involving a Tesla vehicle running on Autopilot. Two years ago, a driver was killed when a Tesla Model S drove into the side of a truck. It was reported that the driver may have been watching a Harry Potter movie at the time of the accident.
Tesla has previously been criticised for talking about the safety of its technologies after serious accidents or fatalities. It addressed this point in its blog, saying: “In the past, when we have brought up statistical safety points, we have been criticised for doing so, implying that we lack empathy for the tragedy that just occurred. Nothing could be further from the truth.
“We care deeply for, and feel indebted to, those who chose to put their trust in us. However, we must also care about people now and in the future whose lives may be saved if they know that Autopilot improves safety. None of this changes how devastating an event like this is or how much we feel for our customer’s family and friends. We are incredibly sorry for their loss.”
Read more: Uber: Self-driving cars ordered off road by US, sells to Grab
Read more: Toyota halts autonomous car tests after Uber accident
Internet of Business says
This latest fatality puts US regulators in a difficult position. Arizona authorities took Uber’s self-driving test cars off the road after a pedestrian was killed by one during an autonomous test last month, but this new case involves a technology, Autopilot, that is already built into production models.
The accident reveals the core problem with both driverless and driver-assistance technologies at present: in the two most recent fatalities, the general thrust of the argument has been that the human drivers were at fault, either for not looking at the road or for not keeping their hands on the wheel.
Tesla has been quick to point out – in a written submission to Internet of Business which failed to mention either the accident or the dead driver – that Autopilot is a Level 2 system designed for driver assistance, and should not be described as “autonomous”. In a strict technical sense this is true; Autopilot does not offer Level 4 autonomy.
But under National Highway Traffic Safety Administration (NHTSA) definitions of levels of vehicle automation, Level 2 systems are capable of steering, braking, and accelerating independently, and when these systems are engaged on the highway, drivers trust them to do exactly that, whether they are wise to do so or not.
After all, the standard English definition of ‘autonomy’ is the ability to act independently. Any company that calls its system ‘Autopilot’ must therefore take responsibility when its technology fails – especially when it is designed to keep drivers safe. To reject any association with the word ‘autonomous’, as Tesla did in an email to Internet of Business, while promoting a product called ‘Autopilot’ is surely double standards.
Tesla should now drop the misleading Autopilot name in the interests of future driver safety, and call its system ‘Driver Assistance’, if it wishes to avoid all association with words like ‘autonomous’ or ‘automated’ (presumably for legal reasons).
Waymo CEO John Krafcik recently explained what he sees as the key difference in the technology’s application. “Tesla has driver-assist technology and that’s very different from our approach. If there’s an accident in a Tesla, the human in the driver’s seat is ultimately responsible for paying attention.”
Nevertheless, in both recent fatalities, technologies, not people, were driving the cars, regardless of whether their human drivers should have been watching the road or holding the wheel.
This fact cannot be ignored, and it points to a problem that may become endemic in both semi-autonomous and driver-assisted vehicles: people trust the technology to look after them, and as a direct result they disengage from their own responsibility to look after themselves.
The core question, then, is simple: should the developers of a technology that is still in its infancy seek to blame human drivers for every crash or death? Questions like this will become increasingly commonplace as AI-enabled, autonomous, and/or assisted systems become more dominant in our lives, calling into question longstanding legal concepts, such as liability, and ethical concepts, such as responsibility.
The subtext, therefore, is all about trust: human drivers need to trust autonomous or smart technologies, but doing so makes them focus on things other than the road. To suggest that human drivers should concentrate on the road and the wheel while their vehicles are under software control is tantamount to suggesting that they shouldn’t trust the technology. Carmakers can’t have it both ways.
As we move towards completely autonomous systems, including driverless trucks and hands-free road vehicles that are designed purely for passengers, the law urgently needs to catch up.
Read more: New Baidu, Jaguar Land Rover driverless cars take to the road
Read more: Waymo turns the ignition on self-driving trucks
Read more: Fetch launches world’s first autonomous AI smart ledger
Read more: Pure Storage, NVIDIA launch enterprise AI supercomputer in a box
Read more: AI regulation & ethics: How to build more human-focused AI
Read more: Cambridge Analytica vs Facebook: Why AI laws are inadequate