
Driverless car safety revolution could be scuppered by moral dilemma

'To align moral algorithms with human values, we must start a collective discussion about the ethics of autonomous vehicles'

Ian Johnston
Science Correspondent
Thursday 23 June 2016 20:36 BST
Would you get inside a driverless vehicle that would kill you to save others?

Driverless vehicles could theoretically reduce the number of accidents by as much as 90 per cent.

But this bright vision of the future – in which the number of fatalities on Britain’s roads could be reduced from about 1,700 to just 170 – is under threat because of a set of seemingly intractable moral dilemmas.
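As a back-of-the-envelope check of those figures (the 1,700 baseline and the 90 per cent reduction are the article's; the calculation itself is just illustrative arithmetic):

```python
# Illustrative check of the article's figures only.
annual_uk_road_deaths = 1_700    # approximate baseline cited in the article
reduction = 0.90                 # "as much as 90 per cent"

deaths_with_avs = annual_uk_road_deaths * (1 - reduction)
print(f"Projected fatalities: {deaths_with_avs:.0f}")  # -> 170
```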

New research has found that people favour autonomous vehicles (AVs) designed to minimise the number of casualties, even if this means deliberately sacrificing the passengers – but only when other people are riding in them.

When asked what kind of AV they would buy, however, respondents strongly preferred models that protected the occupants at the expense of others.

Writing in the journal Science, academics in the US and France suggested this might mean laws would have to be brought in to ensure only "utilitarian", casualty-minimising vehicles were available on the market.

However, they also warned that such legislation might be "counter-productive" and threaten to derail the dramatic reduction in accidents promised by widespread use of AVs.

“Although people tend to agree that everyone would be better off if AVs were utilitarian (in the sense of minimizing the number of casualties on the road), these same people have a personal incentive to ride in AVs that will protect them at all costs,” they wrote.

“Accordingly, if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so.

“Regulation may provide a solution to this problem, but regulators will be faced with two difficulties: First, most people seem to disapprove of a regulation that would enforce utilitarian AVs.

“Second – and a more serious problem – our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether.”
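The researchers' trade-off argument can be made concrete with a rough model. In the sketch below, every parameter except the article's 1,700 and 170 figures is hypothetical and invented purely for illustration; the study does not publish these numbers:

```python
# Hypothetical illustration of the adoption-delay trade-off.
# Only baseline_deaths and av_deaths come from the article; the rest are assumed.
baseline_deaths = 1_700      # annual fatalities with human drivers
av_deaths = 170              # annual fatalities once AVs are widely adopted
utilitarian_bonus = 10       # extra lives/year saved by utilitarian algorithms (assumed)
delay_years = 5              # adoption delay caused by regulation (assumed)
horizon_years = 20           # comparison period (assumed)

# Scenario A: self-protective AVs allowed, adopted immediately.
deaths_a = av_deaths * horizon_years

# Scenario B: utilitarian AVs mandated, adoption delayed; the full
# human-driver fatality rate persists during the delay.
deaths_b = (baseline_deaths * delay_years
            + (av_deaths - utilitarian_bonus) * (horizon_years - delay_years))

print(f"Immediate adoption:   {deaths_a:,} deaths over {horizon_years} years")   # 3,400
print(f"Delayed, utilitarian: {deaths_b:,} deaths over {horizon_years} years")   # 10,900
```

Under these assumed numbers, the deaths accumulated during a five-year delay swamp the modest annual gain from utilitarian algorithms, which is the shape of the argument the authors make.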

And AVs will need to be programmed with “moral algorithms” designed to make some highly complex decisions, the researchers said.

“Is it acceptable for an AV to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the AV than for the rider of the motorcycle? Should AVs account for the ages of passengers and pedestrians?” the academics pondered.
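The kind of calculation such a "moral algorithm" might make can be sketched as an expected-casualty comparison. The manoeuvres and probabilities below are entirely hypothetical; the study does not publish an algorithm:

```python
# Hypothetical sketch of a utilitarian "moral algorithm": pick the
# manoeuvre with the lowest expected number of fatalities.
# All options and probabilities are invented for illustration.

manoeuvres = {
    # name: list of (probability_of_death, number_of_people_at_risk)
    "continue_ahead":   [(0.8, 1)],   # likely kills the motorcyclist
    "swerve_into_wall": [(0.3, 1)],   # risks the AV's own passenger instead
}

def expected_fatalities(outcomes):
    """Sum probability-weighted deaths across everyone at risk."""
    return sum(p * n for p, n in outcomes)

choice = min(manoeuvres, key=lambda m: expected_fatalities(manoeuvres[m]))
print(choice)  # -> "swerve_into_wall" (0.3 expected deaths vs 0.8)
```

A purely self-protective algorithm would invert this choice by weighting the passenger's survival above all else, which is exactly the design disagreement the researchers describe.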

“To align moral algorithms with human values, we must start a collective discussion about the ethics of AVs – that is, the moral algorithms that we are willing to accept as citizens and to be subjected to as car owners.”

The researchers also suggested courts might have some difficult decisions to make in a world where computers make debatable moral choices.

“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?” they asked.
