Researchers discuss self-driving car knob settings for ethical choice

The finalized prototype of the Google self-driving car.
(Tech Xplore)—Learning what the technology in your driverless car of the future will do is not the most daunting task to think about. The really difficult question is the what-if in any scenario where the car would need to sacrifice either the people in the car or the people in the street in some unavoidable and serious accident.

Writing in New Scientist, Abigail Beall said these moral decisions are one of the major problems confronting manufacturers.

If you are not in a driverless car, the answer lies with you and your ethics. If you are in an autonomous vehicle, though, the question appears to rest with your car, which has no ethics, only the work of its engineers. There is, though, yet another option being suggested for such questions, and it comes from a team in Italy. They have considered a way to put the decision in the hands of the human passenger in the AV.

Their suggestion is in the form of a knob. Their paper is titled "The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law." Their study was published in Artificial Intelligence and Law. The authors are Giuseppe Contissa, Francesca Lagioia and Giovanni Sartor.

"We wanted to explore what would happen if the control and the responsibility for a car's actions were given back to the driver," said Guiseppe Contissa at the University of Bologna in Italy, in New Scientist.

It has been argued, they noted, that self-driving cars should be equipped with pre-programmed approaches to the choice of what lives to sacrifice when losses are inevitable.

"Here we shall explore a different approach, namely, giving the user/passenger the task (and burden) of deciding what ethical approach should be taken by AVs in unavoidable accident scenarios. We thus assume that AVs are equipped with what we call an 'Ethical Knob.'"

"The knob tells an the value that the driver gives to his or her life relative to the lives of others," said Contissa in New Scientist. "The car would use this information to calculate the actions it will execute."

How would the knob provide an answer? It would offer three settings: Egoistic, meaning preference for the passenger; Altruistic, meaning preference for third parties; and a third, Impartial, setting that would allow the car to act in a utilitarian way, giving equal importance to passenger(s) and third parties.

Cheyenne MacDonald in Daily Mail: "With a so-called 'ethical knob,' riders could tune a car's settings so it operates as 'full altruist,' 'full egoist,' or 'impartial' – allowing it to decide based on the way you value your own life relative to others."
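To make the idea concrete, here is a purely illustrative sketch, not code from the paper: it imagines the knob as a single weight between full altruism and full egoism that a planner could use to score candidate maneuvers in an unavoidable-crash scenario. Every function name, maneuver, and risk figure below is hypothetical.

```python
# Illustrative sketch only -- not from Contissa et al.'s paper.
# The knob is modelled as a weight in [0.0, 1.0]:
#   0.0 -> full altruist (only third-party harm counts)
#   0.5 -> impartial/utilitarian (both weighted equally)
#   1.0 -> full egoist (only passenger harm counts)
# All maneuver names and risk figures are made up for the example.

def maneuver_cost(knob, passenger_risk, third_party_risk):
    """Weighted expected harm for one candidate maneuver."""
    return knob * passenger_risk + (1.0 - knob) * third_party_risk

def choose_maneuver(knob, maneuvers):
    """Pick the candidate maneuver with the lowest weighted cost."""
    return min(
        maneuvers,
        key=lambda m: maneuver_cost(knob, m["passenger_risk"], m["third_party_risk"]),
    )

if __name__ == "__main__":
    options = [
        {"name": "swerve_into_barrier", "passenger_risk": 0.7, "third_party_risk": 0.1},
        {"name": "brake_straight",      "passenger_risk": 0.2, "third_party_risk": 0.5},
    ]
    for setting, label in [(0.0, "altruist"), (0.5, "impartial"), (1.0, "egoist")]:
        print(label, "->", choose_maneuver(setting, options)["name"])
```

With these made-up numbers, the altruist setting accepts the maneuver that risks the passenger to spare the pedestrian, while the egoist and impartial settings both pick the maneuver with less risk to the passenger; the impartial case simply minimizes the sum of the two risks.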

Beall, meanwhile, quoted Edmond Awad of the MIT Media Lab, who is a researcher on the Moral Machine project: "It is too early to decide whether this would be a good solution," Awad said, but Beall added that he welcomed a new idea in an otherwise thorny debate. Moral Machine describes itself as a platform for gathering a perspective on moral decisions made by machine intelligence.


Explore further

When self-driving cars drive the ethical questions

More information: The Ethical Knob: ethically-customisable automated vehicles and the law, Artificial Intelligence and Law, link.springer.com/article/10.1 … 07/s10506-017-9211-z

© 2017 Tech Xplore

Citation: Researchers discuss self-driving car knob settings for ethical choice (2017, October 18) retrieved 15 October 2018 from https://techxplore.com/news/2017-10-discuss-self-driving-car-knob-ethical.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


User comments

Oct 18, 2017
I find these discussions pretty far removed from reality. We're dealing here with 'unavoidable crash scenarios', i.e. scenarios that are only relevant for the last split second of the event (no, there are no 'unavoidable crash situations' where you have 10 seconds of lead-up time).

No matter how fast an AI can process data and make some superhuman assessment of a situation and all the possible ethical ramifications of every action it could take (even with 100% certainty in the correctness of these computations), the fact remains that a car has mass and cannot be steered onto just any path within that last split second. It's gonna crash. If it has tried to avoid harm for as long as possible, that is good enough. No one would require any more from a human driver - and I think we should not demand more from AI before it can at least achieve that.
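For a sense of scale, here is a rough back-of-the-envelope sketch (my own assumed numbers, nothing from the article) of how little a hard-braking car can actually change in a final half second:

```python
# Back-of-the-envelope illustration of the "last split second" point above.
# Assumed figures: urban speed and dry-road braking; adjust to taste.
v = 50 / 3.6      # 50 km/h expressed in m/s (about 13.9 m/s)
a = 7.0           # full braking deceleration in m/s^2 (dry asphalt, assumption)
dt = 0.5          # the "split second" left before impact, in seconds

distance_covered = v * dt - 0.5 * a * dt**2   # metres travelled even while braking hard
speed_shed = a * dt * 3.6                     # km/h removed by braking in that window

print(f"In {dt} s the car still covers about {distance_covered:.1f} m")  # ~6.1 m
print(f"and sheds only about {speed_shed:.0f} km/h of speed")            # ~13 km/h
```

Even under ideal braking, the car still travels roughly six metres and loses only a small fraction of its speed in that window, which is why the window for any 'ethical choice' is so narrow.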

Oct 18, 2017
...and I think we should not demand more from AI before it can at least achieve that.


But we will. It seems inevitable to me that there will be cases in the future where people will blame, or worse, sue the car manufacturer for killing their loved one because of how it was programmed. I'm not sure this dilemma will be solved until we are forced to confront it.

In the meantime, suppose we have a knob that can be altered at will. I'd imagine then that if the driver lived and the knob was set to 'egoist', the loved ones of the person killed would do something similar, as in sue for damages or perhaps wrongful death, knowing that the driver chose to save himself instead of the theoretical pedestrian.

I have no idea what the right answer is here, but giving people a choice just doesn't seem quite right to me.

Oct 18, 2017
Three issues:

1) People don't know themselves. There is a difference between asking a person what they would do, and dropping them in the actual situation. What they tell you they would do accords with their social agendas - i.e. how they would like to be perceived or to perceive themselves.

2) Because of point 1, the ethical knob is a non-choice, because nobody will want to set it to anything other than "altruistic" unless they can do so in secret and without later punishment.

3) "Utilitarian" is not impartial, but depends on the values and priorities the programmers and the social authorities set upon it. Utilitarianism without extremely careful value adjustments quickly leads to absurdities, because it's basically a min-max strategy to morals, and it is always depending on whose happiness or good you are trying to maximize.


Oct 18, 2017
For example, utilitarianism is ultimately based on a fiction: that society is a kind of super-person whose well-being you're trying to maximize. In reality, society isn't a real entity. What really exists is individuals with different joys and preferences, so utility does not aggregate. It's meaningless, and you cannot actually judge that saving two out of three lives is better than saving one.

Better for whom, and why? Well, if the one to be killed were your child, would you still spare the two against the one? One can take it to absurd lengths: what if the death of one person would spare the mutilation of the entire population of the US? Wouldn't that be better? Not necessarily, if you ask the one person who is to be killed.

>"(no, there are no 'unavoidable crash situations' where you have 10 seconds of lead-up time)."


Stalling your car on a railway crossing. Stuck throttle on the highway.

Oct 18, 2017
These misplaced efforts to address moral decisions in autonomous cars point out the fact that programming cannot be moral and that efforts to mimic a human choice are still not moral.

When you deny the occupant(s) of the vehicle control in an accident, moral decisions are impossible.

The concept of a moral knob would be laughable if it were not being seriously considered. How can the non-driving driver prejudge a situation where he/she has no control of the vehicle in an accident?

A car programmed to kill its occupants can be expected to fail in the marketplace if the owner/occupants of the vehicle know how it is programmed. If they are denied the knowledge of such programming, there is a huge liability when the occupants are injured and/or killed in an accident where the programming caused the injuries or death. The "moral knob" concept is obviously a poor attempt to ameliorate the liability.

Oct 18, 2017
For utilitarianism to be impartial, it cannot dismiss the preferences of those who are thrown under the bus.

Even if one person's death will save a million, you still have to ask that person if they want to die, or whether someone else would prefer them to live and the million to die. If you argue that they ought not to be so selfish, you're simply making a circular argument.

The function by which you judge the utility to be maximized is never morally and ethically neutral. It's always taking a side with some people against others, and some understanding of reality against another.

For example, the early communists in Germany and Russia justified their violence by noting that postponing the inevitable proletarian revolution would only make it more bloody, so a few people dead now is better than a lot more later. Of course, the workers' uprising was a pure fantasy, the revolution made inevitable only by the communists setting themselves up to the task.

Oct 18, 2017
In any conceivable accident where an autonomous car has the choice to kill the pedestrian or the driver, it is almost certain that the pedestrian ran in front of the car. I should never have to pay the price for someone else's misbehavior. In any weighing of ethics you have to include the factor of who is at fault.

Oct 18, 2017
Ahaahaaa this is ethical in the sense that by giving the choice back to the occupants it alleviates the ethical responsibility of insurers to compensate victims.

SCAM ALERT! Meanwhile...

"For example, utilitarianism is ultimately based on fiction"

-that -isms have any sort of practical meaning whatsoever. Certainly, future AI will have no use for -isms. So why should we?

AI will be the ultimate role model for fearful, corrupt humans.

Oct 18, 2017
"When you deny the occupant(s) of the vehicle control in an accident, moral decisions are impossible"

-But dog/god, certainly you will agree that moral decisions can be made in advance, as in 'thou shalt not kill', and that these decisions can be programmed into car AI?

And that once so predisposed to specific moral actions, AI will be much more reliable at carrying them out than the standard sinful human?

God should be happy that humans can design extensions of themselves that are inherently sinless.

It's like a prayer wheel. Give it a spin and a prerecorded prayer is automatically sent skyward. This is technology at its most blessed.

Oct 18, 2017
This issue comes up routinely, it seems. I'm still of the opinion that our time isn't well spent getting stuck in endless permutations of the 'what ifs' of dire, 'perfect storm' constructed scenarios. I think it is asking the wrong questions, and rather than agonizing over these, how about we broaden the scope of what we think AI can do, and can be.

If AI can communicate between vehicles and street infrastructure there should be nothing left to chance. Any issue up ahead where other vehicles have acted, either momentarily or consistently (speed change / course diversion to avoid an animal, or a static hazard like a pothole or disabled vehicle) should be broadcast to approaching vehicles in all directions. [1/4]

Oct 18, 2017
I.e., if a vehicle (or several) picks up a pedestrian standing close to the curb and not near an intersection or crosswalk (or having moved closer to it as several vehicles passed, building a vector or suggesting a possibility), then approaching vehicles - far from needing instantaneous decision making - can slow, shift slightly away, alert the driver to it, divert attention to it, etc.

Or, as temporarily annoying as it may be to have just parallel parked along a street and find the door lock won't disengage because I was too foolish to check my mirror and blind spot for the oncoming cyclist that my car, and the row of cars behind me, saw coming a block away, I think I'd appreciate not 'dooring' that person and causing harm and damage to everyone involved. [2/4]

Oct 18, 2017
Imagine your vehicle AI knowing your habits as well as you do and, through collaboration, knowing that some unannounced and unposted construction lies ahead, so it gently suggests alternatives to you long before you become part of a snarl of backed-up traffic, keeping you on time or keeping your groceries from thawing.

Blowing a tire can be frightening and occasionally human responses may be the worst things we can do when it happens in traffic: slamming the brakes, steering over-compensation, picking an unsafe location to stop. I'm confident AI could sense such a thing coming - but even if it doesn't - the reaction should impact no other driver, and if done well enough, perhaps no other driver even notices (simultaneous micro-decelerations avoiding fender-benders or pile-ups, potentially opening space in upcoming traffic, smoothly enabling us to pull over appropriately). [3/4]

Oct 18, 2017
AI is nifty, with loads of potential to make up for our shortcomings. If we can avoid vulnerabilities and rigid thinking, then very little should come as a surprise to it. The better we prepare ourselves and the AI, the less it should ever come down to philosophy for its decision making. It'll have had all the warning it needed ahead of time to avoid all that nonsense. Or... I could be wishfully thinking terribly, and my apologies for the wall of text. [4/4]


Oct 19, 2017
>"-that -isms have any sort of practical meaning whatsoever. Certainly, future AI will have no use for -isms. So why should we?"


That point of view would be called pragmatism. You can't bypass the question of morality and systems of morality that easily, as the systematic rejection of any system is also a system.

>"-But dog/god, certainly you will agree that moral decisions can be made in advance, as in 'thou shalt not kill', and that these decisions can be programmed into car AI?"


Finding such universal rules is impossible without asserting a moral authority and a moral system, or some sort of -ism. "Thou shalt not kill" is easily observed to be not such a universal rule. You have to remember that a computer takes any rule absolutely at face value - it has no power to be arbitrary like we are.

Oct 19, 2017
"That point of view would be called pragmatism. You can't bypass the question of morality and systems of moraly that easily, as the systematic rejection of any system is also a system"

-and that POV would be considered astigmatism. As I've said, the system is ethically superior because 1) it is intrinsically safer than the average distracted, emotional, impaired human; and 2) it will become safer as improvements based on experience are incorporated, unlike the average distracted, emotional, impaired human, who never improves. We've had to constantly include widgets and doodads to try to keep the average distracted, emotional, impaired human from killing himself and others.

Now we can replace him entirely.

"or some sort of -ism"

-OK, I have one. 'Obsolete-ism'. Replace the weak link. Get humans out of the loop. We'll all be safer and more comfortable.

The fact that we can realize this and take the necessary steps to make it happen, means we are moral and ethical.


Oct 19, 2017
Humans make -isms as some sort of abbreviation of related opinions and actions. AI has no need to abbreviate. It can deal with each and every scenario individually and make the best possible decision in each case.

The three laws of robotics are meaningless to the average robot because they consist of indefinable words unrelated to real scenarios.
cont>

Oct 19, 2017
"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

-So what does the term 'harm' mean to a robot? What does 'order' mean? What does 'conflict' mean?

We struggle to define these words and can't without assigning them to a specific scenario.

A robot on the other hand can have access to an immense database of scenarios and their preferred outcomes, and choose those which best apply.

Robots can't use abstractions and concepts because to them these things don't exist. They don't really exist for us either. That's why when things go awry we end up in court, where we analyse in meticulous detail what actually happened and what actions we should have taken.

Robots do this whole process instantaneously.

Oct 19, 2017
"AI has no need to abbreviate. It can deal with each and every scenario individually and make the best possible decision in each case."


In practice, no, since it very quickly runs out of memory and processing ability. A computer is not omnipotent, especially one that has to fit inside a car.

And yet still, the "best" possible scenario cannot be defined, as you are still neglecting to answer: best possible for whom? In order to define "best", you have to pick sides, and so you dive right into the dreaded -isms.

Oct 19, 2017
"A robot on the other hand can have access to an immense database of scenarios and their preferred outcomes"


Preferred by whom? How do you choose? Why do you make that choice?
