When self-driving cars drive the ethical questions

Three traffic situations involving imminent unavoidable harm. (a) The car can stay on course and kill several pedestrians, or swerve and kill one passer-by. (b) The car can stay on course and kill one pedestrian, or swerve and kill its passenger. (c) The car can stay on course and kill several pedestrians, or swerve and kill its passenger. Credit: arXiv:1510.03346 [cs.CY]

Driverless cars are due to become part of day-to-day highway travel. Beyond their technologies and safety reports lies a newer wrinkle posed by three researchers: ethical questions that policy makers and vendors will need to explore.

"Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?" is by Jean-Francois Bonnefon, Azim Shariff and Iyad Rahwan. They are from Toulouse School of Economics, University of Oregon and MIT.

We are told that driverless cars are capable of preventing road accidents and deaths in significant numbers. More recent discussions, though, suggest that the next chapter of driverless cars will be more complex. When self-driving cars first appear on roads, experts say, the safety picture may be nuanced: people are not necessarily prepared for the abundance of caution used by automated drivers.

Now three researchers are adding to the mix of concerns to think about: the ethical questions raised by the presence of self-driving cars. Their paper on arXiv poses traffic situations involving imminent, unavoidable harm.

The question is how to assess the relative morality of different algorithms: who gets harmed and who gets spared. (a) The car can stay on course and kill several pedestrians, or swerve and kill one passer-by. (b) The car can stay on course and kill one pedestrian, or swerve and kill its passenger. (c) The car can stay on course and kill several pedestrians, or swerve and kill its passenger.

Should the passenger be killed to save the other people? MIT Technology Review called it "an impossible ethical dilemma of algorithmic morality."
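To make the utilitarian option concrete, here is a minimal sketch of such a decision rule in Python. Everything in it, from the Outcome record to the casualty counts, is a hypothetical illustration; the paper proposes no specific implementation.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str            # "stay" or "swerve"
    pedestrian_deaths: int
    passenger_deaths: int

def utilitarian_choice(outcomes):
    """Pick the action that minimizes total deaths,
    weighing passengers and pedestrians equally."""
    return min(outcomes, key=lambda o: o.pedestrian_deaths + o.passenger_deaths)

# Dilemma (c): stay and kill several pedestrians, or swerve and kill the passenger.
dilemma_c = [
    Outcome("stay", pedestrian_deaths=5, passenger_deaths=0),
    Outcome("swerve", pedestrian_deaths=0, passenger_deaths=1),
]
print(utilitarian_choice(dilemma_c).action)  # -> swerve
```

Whether buyers would accept a car that returns "swerve" here is precisely what the authors' surveys probe.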

The authors' abstract stated, "It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm."

Continued MIT Technology Review: "Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions?"

To be sure, "Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today," the authors wrote. "As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent."

The authors believe answers are most likely to come from surveys employing the protocols of experimental ethics. Overall, they wrote, the field of experimental ethics offers key insights into the moral and legal standards that people expect from autonomous driving algorithms.

The researchers conducted three online surveys in June. The studies were programmed on Qualtrics survey software and recruited participants from Amazon's Mechanical Turk platform, who were compensated 25 cents each.

Results? They were "interesting," said MIT Technology Review, "if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll."

The authors said, "Three surveys suggested that respondents might be prepared for [autonomous vehicles] programmed to make utilitarian moral decisions in situations of unavoidable harm. This was even true, to some extent, of situations in which the AV could sacrifice its owner in order to save the lives of other individuals on the road."

Offering his reflections on the research, Dave Gershgorn in Popular Science wrote, "Sure, [self-driving cars] can reduce traffic fatalities by up to 90 percent. And like the field of ethics itself, what happens in the other 10 percent is still up for debate."



More information: Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? arXiv:1510.03346 [cs.CY], arxiv.org/abs/1510.03346

Abstract
The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.

Journal information: arXiv

© 2015 Tech Xplore

Citation: When self-driving cars drive the ethical questions (2015, October 24) retrieved 16 July 2019 from https://techxplore.com/news/2015-10-self-driving-cars-ethical.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


User comments

Oct 24, 2015
Until humans get significantly better at ethical decisions, I'm not about to require optimal performance from machines.

The a) b) c) question is easily answered: As a driver you are not required to lay your life down on the road. You are required to obey all safety regulations and to try to avoid danger to yourself and others (even above following the rules of the road!).
So if it's a question between saving the passenger(s) or others, the answer is easy: Passengers. Always.

Oct 24, 2015
A preprogrammed response to these situations is almost certainly going to be worse than a human driver's judgement. There are certainly ethical questions about cars under programmed control. The first question is "Is it ethical to put vehicles on the road which remove human control of the vehicle?" The answer to that question is no.

We already have cars which brake unexpectedly and the preprogrammed intervention cannot be turned off. The driver who cannot control the brakes on the car can lose control resulting in injuries and death. How ethical or moral is that and who is liable when the car causes an accident?

If a self driving car intentionally runs over a pedestrian, is the manufacturer liable?

These questions go away if you stop placing decisions in the control of a computer.

Oct 24, 2015
I worked on self-driving cars for the DARPA Grand Challenge, the same event where the Stanford team planted the seeds for the Google self-driving car. I can tell everyone that all the hype about driverless cars is exaggerated, and it will be nothing more than super useful driver "assist" for a long time. The cars will not completely drive themselves because of the legal and ethical responsibility such as outlined in this article. Those people just looking at the technology alone are looking at it from a techno-geeky perspective, while reality will follow a path that also considers legal and safety concerns.

Oct 24, 2015
"an impossible ethical dilemma" is no different than an ethical equivalence.
Creating hypothetical situations that are impossible to decide is effete.
If you cant decide then it doesnt matter. Just make it a configuration choice for the owner.

Anyhow, who really believes that cars will ever make ethical choices? If someone jumps in front of them they will attempt to stop. That's it. They aren't going to choose to run down the old lady to save the baby. Just stop it.

Oct 25, 2015
I believe a self-driving car should follow the same pattern in situations like that, and that is to do what a human would: I imagine most people would simply hit the brakes but not turn off the road. But if it keeps changing how it reacts each time, based on a myriad of reasons that no human has time to process, the pedestrians equally have no idea which direction to jump or run out of the way, should there be an option for them to do so. A human can somewhat judge if a pedestrian can get out of the way, for example by signalling with your hands - people can somewhat predict what people do behind the wheel.

When you know the expected behavior, you can equally, as a pedestrian, try to get out of the way, and also avoid taking decisions that force the car into such a situation in the first place. If you second-guess what it might be trying to decide, you might react and in fact end up running into its new path choice.

Oct 25, 2015
It is insane. GPS spoofing, control-system hacking, sensors unreliable in wind, dust, snow, ice, cold and fog - 85% of driving situations - and more. Hence Google is testing in sunny California with an army of technicians for maintenance. It is all hype and illusion for investor money. Not to mention the legal and moral ramifications of life-and-death algorithms run by machines setting a price for human life, just like in Fight Club. People, think before you utter something.

Oct 25, 2015
In the pictured dilemma for the car at the beginning of this article, it seems that an attempted panic stop is required regardless of the ethical choice taken. This type of stop precludes any turning, as locked front wheels won't cause the vehicle to turn.

Oct 25, 2015
"In the pictured dilemma for the car at the beginning of this article, it seems that an attempted panic stop is required regardless of the ethical choice taken. This type of stop precludes any turning as stopped front wheels won't cause the vehicle to turn."


That's what makes it so problematic. The car can attempt to turn, but it will do so at the expense of not being able to slow down, which will lead to the destruction of the car and driver.

The problem of course is thinking about it too philosophically. Why should we employ utilitarianism in the first place? Utilitarianism is already a contradictory philosophy that leads to absurd results if we take it seriously, such as enslaving everyone to make a few people extremely happy.

In practice, basic self-preservation of the vehicle itself should suffice - that's what we humans do. What causes minimum damage to it, causes minimum damage to anyone and everything else around, and that's the best there is to do.

Oct 25, 2015
@Eikka: We have ABS on every car since 1990...

Oct 25, 2015
I am in a self-driving vehicle and a group of drunk idiots is crossing the street: would the car choose to save them and crush my head?
I am in a self-driving vehicle and a group of boys wants to "joke" with the self-driving vehicle: would the car choose to save them and crush my head?
I am in a self-driving vehicle and a band of mafiosi wants to kill me the easy way and sends a group of thugs into the middle of the road: would the car choose to save them and crush my head?
My wild guess is that the car will ONLY have to follow the rules of the road: if you make a mistake, you get the consequences.
If the traffic light is red and you cross, the car will try to save both you and the driver; if that is not possible, you will be run over.
That's how it should work imho.
Otherwise I will NOT get a self-driving car.
I wouldn't feel it was safe and, more than that, I wouldn't feel it was FAIR.
I don't want to lose my life because some idiots cross the street at the wrong moment.
The car should make similar choices to what the driver would.

Oct 25, 2015
Well, getting hit by a fast-moving car usually results in a quick death
@Lex
not necessarily. this all depends upon the vehicle, the speed, the agility and fitness of the pedestrian, and some other factors. People have been hit by speeding vehicles and dragged distances and survived... and some people have taken slight hits at low speed and died.
there are a lot of factors involved in surviving a high-speed impact from anything

IMHO - given that our initial reactions to anything tend to be based upon instinct and training (as Eikka noted above), it appears self-preservation should be the primary underlying consideration for the algorithm, with situational caveats of "most life" or "survivability" as well as the least damaging choice.

much like drivers are taught anyway (rules of the road) as, again, noted by Eikka


Oct 25, 2015
But likewise, from the pedestrian's perspective, it's better the other way round
except that there are rules for travel and pedestrians often are just as ignorant or inattentive as modern drivers (Darwin award recipients in training?)

rules of the road should supersede everything: if a pedestrian is in the road where it is not permitted, then driver safety should prevail. (of course, we should also be smart enough to build protected crossings for pedestrians that do not require the use of the road... but that is just IMHO)
to be fair to all concerned, the car ought to kill all the pedestrians, and then detonate, and kill the passenger
interesting approach.... saves money, saves time, saves extended traumatic psychological support and meds...
i like it!
LMFAO

I do think this sums it up best, though (at least, IMHO)
In practice, basic self-preservation of the vehicle itself should suffice - that's what we humans do


Oct 25, 2015
The only ethical question that needs to be considered is whether machine drivers will make dilemmas like the ones described above rare or not.

And the answer is yes.

Machines are already able to prevent and resolve such issues much better than any human.

They are therefore an ethical advancement, no matter what the philo-wannabes would have you believe in their desperation to stay relevant.

Oct 25, 2015
there is only one ethical question: how does a distracted driver react? machines are a distraction to a trained driver. so train the drivers again and we can save lives, or allow a machine to kill us.

Oct 25, 2015
"@Eikka: We have ABS on every car since 1990..."


Yes, and it compromises straight-line stopping distance for the ability to steer. Braking force, just like acceleration btw., is maximized with a certain small amount of slip, which also removes the lateral grip of the wheel and with it the ability to steer.
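The slip-grip relationship described here is commonly modeled with Pacejka's "Magic Formula"; below is a minimal sketch, with illustrative coefficients rather than real tire data.

```python
import math

def pacejka_mu(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'Magic Formula' for the longitudinal friction
    coefficient as a function of slip ratio (0 = rolling freely,
    1 = locked wheel). B, C, D, E are illustrative values only."""
    bs = B * slip
    return D * math.sin(C * math.atan(bs - E * (bs - math.atan(bs))))

# Friction rises to a peak at a modest slip ratio, then falls off
# toward a locked wheel - the tradeoff described in the comment.
for s in (0.0, 0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"slip={s:.2f}  mu={pacejka_mu(s):.3f}")
```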

"Machines are already able to prevent and resolve such issues much better than any human."


Our AIs are dumber than a bag of hammers and lack 99% of the situational awareness, perception and comprehension of their surroundings, which is why they have to be programmed with such wide safety margins that they can't get into trouble. That's not an "ability to prevent and resolve" - that's a handicap and a disadvantage because the cars drive too carefully and end up causing traffic jams and accidents with real drivers.


Oct 25, 2015
The only thing the car computer has to do is to try to avoid the accident. It is as simple as that.
And this is nothing new; modern aeroplanes do it often. When a plane is falling, the computer "knows" perfectly well where populated areas are and are not, but it does not care; what it does is everything possible to recover and avoid the crash. In fact, the first thing it does is to disconnect the autopilot. Only the human pilot can take decisions like that at the very end.

Oct 25, 2015
@Eikka: Wrong, a "panic stop" with locked-up wheels needs much more space than proper ABS-assisted braking. New ABS systems can also control each wheel individually for maximum braking efficiency; no human driver could do that.

Oct 26, 2015
What a computer car cannot do is come up with a creative solution, i.e. accepting non-fatal injury to 3 pedestrians by sideswiping them or hitting them with just the corner of the car, to avoid 1 fatal injury to 1 pedestrian.

Oct 26, 2015
This again?

These so-called 'ethicists' ignore the real ethics of the situation:

- Autonomous vehicle technology already yields transportation that's safer than human-operated vehicles. That will only get better going forward.

- The absurd 'hypothetical situations' they invoke are likely to be so rare, they won't even be a blip on insurers' books.

- These same ethicists have never bothered to construct the ethical case for *human operated* vehicles. We're left to wonder why they suddenly appear as a roadblock to autonomous vehicles.

The sad reality is that there are vested interests who are threatened mightily by autonomous vehicles, and there is an entire infrastructure of academics and think tanks in America whose sole job is to take money and churn out reasons why their benefactors should prevail. I think we have yet another instance of it in this half-baked, slanted 'analysis.'

Oct 26, 2015
Based on the preconception that I simply would not travel in a 'self-driving' car (self?) unless it valued my life above all else ... The devil is then in the details.

Does the simulated intelligence assess the action required for the minimal number of pedestrian deaths or drive straight into the entire group senselessly?

Handbrake turn? Just kidding .. Maybe.


Oct 26, 2015
The examples these 'ethicists' give are always so pure, so perfect. The vehicle's automation in their stupid hypothetical situations is always able to anticipate perfectly the outcomes of its actions and weigh them.

Nothing like that happens in real life. The people the vehicle is trying to predict are independent actors - and humans are not very predictable. Chaos is a regular feature of the macro world. Programmers aren't going to spend a lot of time agonizing about these perfect scenarios; they're going to program their vehicles to detect a risk and act to reduce vehicular energy, with reflexes much faster than human drivers can manage. Sometimes, someone will get hurt or killed. But it will happen far less often than with human drivers.
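A minimal sketch of that "detect a risk, shed energy" reflex; the friction value, safety margin and threshold logic are assumptions for illustration, not any real controller's parameters.

```python
def react_to_hazard(speed_mps, hazard_range_m, mu=0.7, g=9.81, margin=1.25):
    """Brake hard if the hazard lies inside the margin-padded
    stopping distance; all numbers are illustrative assumptions."""
    stopping_distance = speed_mps ** 2 / (2 * mu * g)
    if hazard_range_m < margin * stopping_distance:
        return "full_brake"
    return "continue"

# At 20 m/s (~72 km/h) a hazard 25 m ahead is inside the stopping
# envelope (~29 m), so the sketch commands maximum braking.
print(react_to_hazard(speed_mps=20.0, hazard_range_m=25.0))  # -> full_brake
```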

Factor in autonomous vehicles which can exchange data and coordinate with each other and stationary sensors. Those don't exist yet, but they will. Smart vehicles will arrive first; but smart roads won't be far behind.

Oct 26, 2015
In Oz we have compulsory Third Party Injury Insurance for drivers. That means if any driver injures or kills a person then the insurance company picks up the tab. On its own that makes me pause to consider moral hazard. Add to the mix that Volvo (car manufacturer) has decided to accept liability for accidents involving its 'autonomous vehicles' and ...

There's no incentive to avoid collisions with people. Who (or what) would suffer any serious penalties?

More bucks for the insurance companies and lawyers?

Better IMO to hold the 'person owner' of the vehicle responsible.

Let's see how popular these robot monsters are then!


Oct 26, 2015
* A note on moral hazard. Fictitious example.

I am driving my vehicle lawfully on a dark rain-swept road. A person runs across the road. I cannot stop. Beside me is another vehicle. I have a choice ... I swerve to avoid the person and impact the other car, thereby incurring property costs. I am (fictitiously) uninsured for property damage. Now, if I don't value the person's life (it's just another meat bag) then I can simply avoid potentially costly car property damage and just maim or kill the foolish person. No property damage and a dead or maimed person, with no serious consequences. Bad luck for the stupid meat bag and some crocodile tears. The other drivers thank me for not smashing them and we all drive away. One pedestrian in a hearse or ambulance.

What's the 'robot car' going to do? Imagine preprogrammed insurance factoring.


Oct 26, 2015
In the US alone, current technology kills more than a hundred people and injures or disables more than 6,500 every single day.
I don't believe any new technology could do worse.
If it reduces accidents by 90% as stated, which I believe to be an understatement, then ethics mandates that we should push it as much as possible.
Society pays a high price for every day this technology is delayed, and ethicists delaying it are unethical. We have plenty of time to improve it.

Oct 26, 2015
"Our AIs are dumber than a bag of hammers and lack 99% of the situational awareness, perception and comprehension of their surroundings"

-Obviously your definition of dumber differs from mine.

"In 2013 there were 543 fatal crashes in the U.S. involving drivers who were ill at the time of the crash, including those suffering from diabetic reactions, seizure, heart attack, high or low blood pressure, and fainting..."

"driver incapacitation to be the sole or main cause of 6.4% of 723 crashes sampled. In 4.4% of the crashes, the driver fell asleep and in 2% the driver experienced a seizure, a heart attack or a blackout"

"Every day, almost 30 people in the United States die in motor vehicle crashes that involve an alcohol-impaired driver. This amounts to one death every 51 minutes. The annual cost of alcohol-related crashes totals more than $59 billion"
cont>

Oct 26, 2015
"In 2013, 3,154 people were killed in motor vehicle crashes involv­ing distracted drivers... approximately 424,000 people were injured, which is an increase from the 421,000 people who were injured in 2012.

"average of 586 older adults are injured every day in crashes... In 2012, more than 5,560 older adults were killed and more than 214,000 were injured in motor vehicle crashes. This amounts to 15 older adults killed and 586 injured in crashes on average every day."

-As for AI cars...

"thanks to the bank of sensors and equipment at their disposal, they're much better than human drivers at spotting trouble coming from all directions – danger can be spotted from farther away and reacted to more quickly.

"software can calculate stopping distances, braking speeds and junction spaces with mathematical precision. Traffic should flow more smoothly, congestion is likely to be reduced and fuel efficiency rates should rise once the robot drivers take over for good."

-etc
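As a rough worked example of that stopping-distance arithmetic, assuming a dry-road friction coefficient of about 0.7 (both numbers illustrative):

$$d = \frac{v^2}{2\mu g} \approx \frac{(30\ \mathrm{m/s})^2}{2 \times 0.7 \times 9.81\ \mathrm{m/s^2}} \approx 65.5\ \mathrm{m}$$

That is roughly 65 metres to stop from highway speed, before any reaction time is added; the point is that software can evaluate this continuously, while a human cannot.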

Oct 28, 2015
-As for AI cars... "thanks to the bank of sensors and equipment at their disposal, they're much better than human drivers at spotting trouble"


But that just isn't the case. The AIs we have have extreme trouble correctly identifying what they are sensing with their "bank of sensors" in the first place, because they are so primitive and have such limited computing capacity. They simply don't understand what they are seeing.

It's false to think that adding a wealth of data improves the AI's ability to interpret its surroundings, because in reality the computers on board something like a Google car are already overwhelmed by the coarse and vague information coming from the rooftop lidar. Attempts at using camera vision for better detail have resolutely failed due to the unsophisticated state of modern image-recognition algorithms.

Giving the AI more data is like trying to show HD video to a mollusc. Its brain just can't comprehend it.

Oct 28, 2015
"@Eikka: Wrong, a "panic stop" with locked up wheels needs much more space than a proper ABS assisted braking. New ABS systems could also control each wheel individually for the maximum braking efficiency, no human pilot could do that."


I didn't say it would. I said a certain amount of slip will maximize braking force - that doesn't imply the brakes are locked.

No human can maintain the perfect amount of slip to minimize braking distance, but that wasn't the point. The point was that there is a tradeoff, because maximizing braking force compromises steering, and steering compromises braking - and the ABS is limited by the same physics.

Your car is heading one way, and trying to make it go the other way while braking puts the wheels sideways to the direction of travel, ABS or no ABS. You can't brake and make a tight turn at the same time, so the computer has to determine whether to attempt to brake in a straight line, or turn and slam into a wall at high speed.

Oct 28, 2015
Or to put the same thing in other terms: in order to brake and turn at the same time, you have to apply more force on the wheels than just turning or braking.

Because the car has to gain angular momentum - inertia is resisting the turning - which means there's a sideways force on the wheel, and at the same time lose linear momentum, which puts a lengthwise force vector on the wheel. When you sum these two vectors, the resulting force is greater than either one alone - it requires more grip out of the wheel to perform a brake-turn, and if there isn't enough the car will simply plow ahead or spin.
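What these two comments describe is the classic "friction circle": the vector sum of braking and cornering force cannot exceed the total grip the tires supply. As a worked equation, with an illustrative friction coefficient and vehicle mass:

$$\sqrt{F_{\mathrm{brake}}^2 + F_{\mathrm{lateral}}^2} \le \mu m g \approx 0.8 \times 1500\ \mathrm{kg} \times 9.81\ \mathrm{m/s^2} \approx 11.8\ \mathrm{kN}$$

Any of that budget spent on turning is unavailable for braking, which is why the car plows ahead or spins when the demand exceeds it.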


Oct 28, 2015
How about a driverless car that, in this situation, kills its passenger, converts into a coffin and then drives itself to the cemetery.
I know, a ridiculous question deserves a ridiculous answer, since we will never allow any vehicle that has to make these decisions on the road.
