The International Transport Forum (ITF) has published a new report, Safer Roads with Autonomous Vehicles?, and the question mark is important. Starting with a healthy dose of scepticism, the ITF says that claims that driverless cars will reduce road fatalities by up to 90 percent through the removal of driver error are untested and reductionist.
The report is particularly timely given the recent fall in consumer trust in driverless technology in the US.
Would eliminating driver error lead to safer roads?
The report raises two points about human error and automation. The first is that human error’s role as a cause of car accidents is overstated.
Of course, autonomous systems don’t get tired, drive under the influence of drugs or alcohol, or get distracted by phone calls. But the oft-quoted statistic that 90 percent of accidents are caused by human error relies in part on post-crash analysis that can’t always be trusted, says the ITF.
“It is reductionist to believe that human error has been properly identified as a contributory factor by those responsible for post-crash forensic investigation, or that all crashes involving human error could have been otherwise avoided by addressing that error,” says the report.
“These considerations do not likely impact the general finding that automation may contribute to significantly better safety outcomes, but it may temper the assessment of automation benefits versus disbenefits,” it adds.
The second point made in the ITF report is that uncertainty remains over how effective autonomous systems will be in reducing fatalities on the road.
The reasoning is simple. The notion that driverless cars will eventually prevent deaths will probably turn out to be true – certainly in the unlikely event that all transport is automated. But that claim makes a number of assumptions about the relationship between the car, the driver, other vehicles, and infrastructure.
The risk of the middle ground
The journey to full automation may well turn out to be the most dangerous part of the process, according to the ITF. For example, autonomous systems that still require an element of human involvement and operate on the basis of shared responsibility could lead to roads becoming far more dangerous.
“Vehicle automation strategies that keep humans involved in the driving task seem risky,” it says. “A shared responsibility for driving among both automated systems and humans may not render decision making simpler, but more complex. Thus, the risk of unintended consequences that would make driving less safe, not more, could increase.”
But the elephant in the room is that fully autonomous systems aren’t ready yet, and few are capable of navigating outside well-mapped city limits. Meanwhile, initial testing with responsibility shared between driver and vehicle is a stage that manufacturers need and regulators require.
This catch-22 has been clear to see in recent months. The fatal accidents involving Uber and Tesla vehicles have highlighted the problem of shared responsibility: neither the drivers involved nor the underlying technology under development can be fully exonerated.
Put another way, human beings begin to trust the technology and so relinquish their responsibility to look after themselves, their passengers, and others. The results: a Tesla owner losing his life against a concrete barrier in California, and a woman being mowed down by an Uber Volvo with a safety driver onboard.
Former BMW chief executive Olaf Kastner hit the nail on the head earlier this year when he said, “The system won’t work perfectly until all vehicles on the roads are driverless. Safety will be an issue for as long as they have to share the space with traditional cars.”
And with Uber reorienting its business to be a hub for all forms of frictionless transport, from flying taxis to autonomous cars, ride-sharing, public transport, and electric bikes, a completely driverless future seems unlikely to ever arrive.
Cybersecurity risks
A running theme in the ITF report is that moments of risk will most likely arise when there is no clarity about who is responsible at any given moment: the driver or the autonomous system. These moments of confusion could become commonplace unless systems are built from the ground up with cybersecurity risks in mind.
That’s because for driverless vehicles to reach the pinnacle of safety and efficiency, they will likely need to move beyond integrated sensors, toward a model that encompasses connectivity to other vehicles on the road and to city infrastructure.
Here lies the inherent cybersecurity risk: to what extent will safe performance be conditional on connectivity to external networks? And on the system not being sabotaged by opportunistic hackers? As autonomous systems evolve to lose steering wheels and driver controls entirely, any system that is open to hackers via the internet would be a terrifying prospect.
If such scenarios can be imagined – deliberately crashed cars, blackmail via car crash, kidnapping via hacked vehicle, autonomous trucks used as weapons – then sooner or later they will happen, if the system allows it.
The report points to two possible responses to the cybersecurity challenge. First, manufacturers and regulators need to adopt a common cybersecurity framework for autonomous vehicles. Second, that framework should ensure that safety systems operate independently of external networks.
The problem is that, at the moment, these issues are locked in competitive IP battles.
Comparing the system design needed to that of an aeroplane, the report says, “Core safety-critical components [should be] functionally isolated on both a hardware and software level from non-critical components.
“Where these functional boundaries lie must be based on robust risk assessment. A second fundamental design principle is that the avoidance of crashes should never depend on access to shared external communication channels alone.”
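As a rough sketch of what that second principle might mean in practice, the example below shows a crash-avoidance check that can never depend on the external channel alone. The names and figures (OnboardReading, V2XAdvisory, the braking constants) are hypothetical illustrations, not from the report: the point is that the braking decision is computable from onboard sensors by themselves, with networked data able only to add caution, never remove it.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical types for illustration only -- not from the ITF report.

@dataclass
class OnboardReading:
    obstacle_distance_m: float  # from local lidar/radar/camera fusion
    speed_mps: float

@dataclass
class V2XAdvisory:
    hazard_ahead: bool  # warning relayed over an external network

BRAKING_DECEL_MPS2 = 6.0  # assumed conservative braking capability
SAFETY_MARGIN_M = 5.0

def stopping_distance(speed_mps: float) -> float:
    """Distance needed to stop from the current speed (v^2 / 2a)."""
    return speed_mps ** 2 / (2 * BRAKING_DECEL_MPS2)

def should_brake(onboard: OnboardReading,
                 advisory: Optional[V2XAdvisory]) -> bool:
    """Crash avoidance must never depend on the external channel alone.

    The onboard check is sufficient by itself; a V2X advisory can make
    the vehicle more cautious, never less. If the network feed is
    missing or sabotaged (advisory is None), safety is unaffected.
    """
    # Safety-critical path: onboard sensors only.
    must_brake = onboard.obstacle_distance_m <= (
        stopping_distance(onboard.speed_mps) + SAFETY_MARGIN_M
    )

    # Advisory path: external data may only add caution.
    if advisory is not None and advisory.hazard_ahead:
        return True

    return must_brake
```

In this arrangement, a hacker who compromises the V2X feed can at worst trigger unnecessary braking; they cannot switch the safety function off. That is the aeroplane-style boundary the report describes.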
Ironically, all of these complications may mean that autonomous vehicles face fewer obstacles – both literally and figuratively – in the sky.
The Safe System approach to driverless cars
The ITF report stresses the importance of the methodology behind driverless vehicle design. Rather than simply replacing human error with systems that don’t fatigue or make the wrong decision at the wrong moment, designers need to shift their starting point, it says.
Vehicles and traffic systems “should be designed in such a way that human fallibility does not result in death or serious injury. Conceived to ensure safety in a world full of human error, the Safe System can also deliver safety in a world of machine errors or unanticipated behaviours.”
Crucially, “a Safe System does not view road deaths and injuries as the inevitable price to pay for a highly-motorised society”, says the report – with an implicit dig at Uber and Tesla. Instead, driverless cars, infrastructure and traffic management should be designed to prevent crashes and ensure that when they do occur, the impact is never beyond the physical limits of the human body.
This kind of framework would help manufacturers and regulators factor accidents into the design process, instead of simply focusing on avoiding them in the first place. The result could be a reduction in fatalities and a shared responsibility for road safety on the part of drivers and manufacturers.
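To make that concrete, a minimal sketch below encodes the kind of survivable-impact speed thresholds often cited in Safe System and Vision Zero literature (roughly 30 km/h where pedestrians are present, for example). The exact figures and names are illustrative assumptions, not taken from the ITF report.

```python
# Illustrative survivable-impact thresholds, as commonly cited in Safe
# System / Vision Zero literature; figures and names here are
# assumptions, not from the ITF report.
SURVIVABLE_SPEED_KMH = {
    "pedestrian_or_cyclist": 30,  # vulnerable road users present
    "side_impact": 50,            # crossing conflicts at junctions
    "head_on": 70,                # opposing traffic, no barrier
}

def max_safe_operating_speed(conflict_types: list[str]) -> int:
    """Cap speed by the most vulnerable conflict present.

    A Safe System assumes crashes will still happen; speed is managed
    so that when they do, impact forces stay within what a human body
    can survive.
    """
    return min(SURVIVABLE_SPEED_KMH[c] for c in conflict_types)

# Example: a street with pedestrians and crossing traffic.
print(max_safe_operating_speed(["pedestrian_or_cyclist", "side_impact"]))  # 30
```

Designing to thresholds like these, rather than to crash avoidance alone, is what distinguishes the Safe System approach from simply swapping a human driver for software.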
In line with a more holistic approach to driverless vehicle design, the ITF also suggests that safety features shouldn’t be a point of market competition, but a precondition of all operations.
“The relative safety level of vehicles […] should not be a competition issue. The regulatory framework should ensure maximum achievable road safety, guaranteed by industry, as a precondition of allowing these vehicles … to operate”.
Internet of Business says
We welcome a report that brings a dash of common sense and hype-busting to the market. While no one doubts the good intentions of companies such as Waymo, Uber, Tesla, and Apple, or the shift by the likes of Toyota, BMW, Ford, and GM towards becoming technology companies, the arrogance and insensitivity of some players in the face of fatal accidents has been unfortunate in the extreme.
Tesla has argued over semantics (why call your software Autopilot if it isn’t an autonomous system? was a question we put to the company, and one it never answered), Waymo crowed that the Uber incident could never have happened with its technology, and so on. People dying at, or under, the wheels of a competitor’s software-controlled car is a matter of collective responsibility, not of product differentiation.
Meanwhile, Intel’s Mobileye division seems to be teaching its autonomous cars to drive assertively and force other cars to move out of the way.
As the recent AAA report suggested, manufacturers are actively losing the support of consumers – certainly in the US – and arrogance won’t win them back. At present, vast sums of money are being poured into treating human beings as the inconvenient element in transport systems. That needs to change.
Additional analysis: Chris Middleton.