A new survey from MIT into the ethical challenges of autonomous vehicles reveals both global preferences and regional variations in answers to some tough questions. Chris Middleton reports.
There are 1.2 billion cars in use worldwide, and every year 1.2 million people die on the roads, meaning that one person loses his or her life each year for every 1,000 cars in use.
Recent figures from the US government reveal that over 37,000 people died in motor vehicle crashes in America alone last year. Ninety-four percent of those crashes involved driver-related factors, such as distraction, alcohol, speeding, or illegal manoeuvres.
The inescapable conclusion is that human drivers are the biggest danger to themselves and to other people.
So getting rid of the driver would appear to be the logical answer. It would also free up the 293 hours that the average American spends behind the wheel of a car every year, according to American Automobile Association (AAA) figures. At least, in the long run.
But a future in which people no longer own or drive cars themselves is a long way off. In the meantime, driverless vehicles will have to share the road with traditional automobiles.
More significantly, they’ll have to co-exist with vulnerable humans: people crossing busy roads, wheeling prams, sitting in wheelchairs, riding bikes, standing on street corners, and generally behaving in a messy and (perhaps) unpredictable way.
Yet despite the genuine commitment of mobility companies to make our roads safer and our air less toxic, there will come a time when an autonomous car will have to make a life or death decision.
Who lives?
This gives rise to some classic ethical conundrums.
For example, in situations where death or injury seems unavoidable, should a computer opt to take an action that is likely to kill the driver/passenger, or the pedestrian who walked in front of the car?
Should a driverless car swerve to hit a couple of people, rather than a group of bystanders? Or strike an adult instead of a child? And who might be responsible for these deaths?
When an Uber test vehicle killed 49-year-old Elaine Herzberg in March as she wheeled her bike across a multi-lane road in Tempe, Arizona, the ethical can of worms opened by the accident became all too apparent.
Was the safety driver responsible for not watching the road? Why did Uber’s autonomous system fail to identify a woman wheeling a bicycle until it was too late? And why were the test Volvo’s own safety systems, which might have prevented the accident, disengaged?
Imagine the lawsuits that would be ongoing today if the dead pedestrian had been the CEO of a Fortune 500 company, rather than a homeless woman.
Or, imagine a future autonomous car electing to strike, say, a disabled person, another woman, a child, or someone from an ethnic minority, rather than a group of middle-aged white men.
In the US, researchers at the Massachusetts Institute of Technology (MIT) have looked into these problems, with a global study drawing in over two million online participants “from over 200 countries”, it says. The aim was to examine different versions of the ethical conundrum known as the ‘Trolley Problem’.
This involves scenarios in which an accident is imminent, and the driverless vehicle (in this case) must opt for one of two potentially fatal options – such as swerving towards a couple of people, rather than a larger group.
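To make the shape of such a dilemma concrete, here is a deliberately crude sketch, in Python, of what a purely utilitarian either/or rule might look like if it were ever written down as code. It is hypothetical: the class, function, and scenario names are invented for illustration, and nothing here reflects MIT’s study or any real vehicle’s software.

```python
from dataclasses import dataclass

# Hypothetical illustration only: neither MIT's Moral Machine nor any real
# autonomous-driving system is claimed to work this way.

@dataclass
class Outcome:
    """One of the two fatal options in a trolley-style dilemma."""
    description: str
    people_at_risk: int

def choose_outcome(option_a: Outcome, option_b: Outcome) -> Outcome:
    """A crude utilitarian rule: pick whichever option endangers fewer people."""
    return option_a if option_a.people_at_risk <= option_b.people_at_risk else option_b

swerve = Outcome("swerve towards a couple of pedestrians", people_at_risk=2)
stay = Outcome("continue towards a larger group of bystanders", people_at_risk=5)
print(choose_outcome(swerve, stay).description)
```

The point of the sketch is how little it captures: everything that matters ethically has already been stripped away by the time the choice is reduced to two numbers.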
The Moral Machine
To conduct the survey, the researchers designed what they called the ‘Moral Machine’, a multilingual online game in which participants could state their preferences in a series of dilemmas that autonomous vehicles might face.
Some of the questions were more intriguing than others. For instance: If it comes down to an either/or choice, should the car spare the lives of law-abiding bystanders, or law-breaking pedestrians – people who might be jaywalking, for example? Most people in the survey opted for the former.
In a future society where people’s reputations are governed by social ratings and popular app usage, such a hypothetical question could become all too real. It’s conceivable that an autonomous vehicle might be able to tell a law-abiding citizen from a serial offender.
In 2020, China – where more and more citizens use the same WeChat app to network and pay for goods – will roll out just such a compulsory ratings system (it’s already in voluntary use). So it’s possible that a future crashing car may decide to take out a criminal. How’s that for an episode of Black Mirror?
“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” said Edmond Awad, post-doctoral researcher at the MIT Media Lab and lead author of the new paper outlining the results of the project. “We don’t know yet how they should do that.”
The Moral Machine compiled nearly 40 million individual responses from around the world. The researchers analysed the data as a set, but also broke out participants into subgroups defined by age, education, gender, income, and political and religious views.
The team found few significant moral differences based on these characteristics. However, they did find clusters of preferences based on cultural and geographic affiliations.
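As a rough illustration of the kind of subgroup analysis described above, the sketch below shows how preferences might be aggregated overall and then broken out by country. The column names and figures are invented for the purposes of the example; this is not MIT’s code or data.

```python
import pandas as pd

# Invented sample data: 1 means the respondent chose to spare the younger party.
responses = pd.DataFrame({
    "country":      ["US", "US", "JP", "JP", "BR", "BR"],
    "age_group":    ["18-30", "31-50", "18-30", "51+", "18-30", "31-50"],
    "spared_young": [1, 1, 0, 0, 1, 1],
})

# Overall preference across all respondents...
print(responses["spared_young"].mean())

# ...and the same preference broken out by country, where cultural
# differences of the sort MIT reports would show up.
print(responses.groupby("country")["spared_young"].mean())
```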
Overall, the researchers found three elements that people most agreed on. People generally believed in sparing the lives of: humans over other animals; the many rather than the few; and the young, rather than the old.
But it wasn’t straightforward: the degree to which respondents agreed or not with these principles varied among different groups and countries. For example, MIT found a less pronounced tendency to favour young people in some parts of Asia, where many cultures honour age and experience over youth.
Conversely, respondents in southern countries had a relatively stronger preference for sparing young people over the old, said MIT.
Public debate
“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now,” says the report.
“Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
This is a key point at a time when countries such as the US and China are locked in a race for dominance in driverless transport.
Recent AAA research found public support for driverless technologies waning in the US, in the wake of the Uber and Tesla accidents.
“The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule,” continued Awad.
“What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions.”
Internet of Business says
This is a timely survey, with the full results published in the journal, Nature. And other experts agree that the core debate should be about ethics and accountability, and not whose technology is best.
For example, our news report on Addison Lee’s plan to bring driverless taxis to London by 2021 found Cambridge Consultants’ machine learning expert Dr Sally Epstein slamming the focus on technology over ethics and transparency.
She said, “When fully autonomous vehicles do finally arrive, explaining how their decisions are made, particularly following accidents, will be much more important than any statistical proof that they experience fewer accidents than with humans at the wheel.”
But while the MIT report itself is fascinating, insightful, and useful, it suffers from boiling down an important debate to a set of binary options. This risks reducing ethics themselves to either/or answers to received, utilitarian questions.
What about option three? What if neither option in the question is acceptable? And who questions the questioner?
After all, coding the instruction ‘Kill a criminal rather than a law-abiding citizen’ into an AI system would itself present a moral hazard to society, even if it is in response to a majority view.
That criminal might be a good person who made one mistake, after suffering a lifetime of hardship and abuse, while the law-abiding citizen who lives might be a terrible individual who has contributed nothing to society.
Asking a machine to decide who lives and who dies can’t be reduced to a simple set of binary options in this way, like a switch in a microprocessor.
At present, there is little evidence – outside of China, at least – that consumers actually want the mass introduction of autonomous transport, despite the problems it may solve in the long term.
Connected, smart, electric vehicles with driver-assistance systems, yes. But mass autonomy? Vendors need to do far more to convince citizens of the benefits of that – especially in the US, where the ‘lone driver on the open road’ is core to the American Dream.
Of course, others may argue that that is the real problem.
What do you think?