
Self Driving Cars and the Trolley Problem

 
KathleenBrugger
 
Total Posts:  1511
Joined  01-07-2013
 
 
 
23 June 2016 15:37
 

New article in Science about the “social dilemma of autonomous vehicles”:

When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.

Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. The study participants disapprove of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

The authors of the study have posted a website where you can judge various traffic dilemmas and even design your own scenarios to add to their stock of scenarios to judge.

I have to say the very first scenario presented is questionable. It asks you to choose between going straight and hitting pedestrians or swerving and hitting another group. The scenario lists the pedestrians’ characteristics, like doctor or homeless person. But the car’s software won’t be able to take those factors into consideration; it could only work on the total numbers of people involved in the various scenarios. And how accurately will the algorithms be able to judge the physics of a potential crash, factoring in all the details? The tiniest variation in the angle at which a car hits a concrete barrier would mean a large difference in the way the car spun out, which would also be influenced by how wet the pavement is, etc.
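A count-based rule with a penalty for unpredictable physics might look something like this toy sketch. Everything here is hypothetical: the function name, the maneuvers, and the numbers are all made up for illustration.

```python
# Sketch of a planner that can only compare estimated casualty counts,
# not personal characteristics like "doctor" or "homeless person".

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected cost.

    `options` maps a maneuver name to (estimated_casualties, uncertainty),
    since the physics of a crash can only be predicted approximately.
    """
    def expected_cost(entry):
        # Penalize uncertain outcomes: a swerve whose result depends on
        # impact angle and wet pavement is worth less than its point
        # estimate suggests.
        casualties, uncertainty = entry
        return casualties + uncertainty

    return min(options, key=lambda m: expected_cost(options[m]))

print(choose_maneuver({
    "straight": (2, 0.1),   # well-predicted outcome
    "swerve":   (1, 1.5),   # outcome highly sensitive to angle and surface
}))  # → straight
```

Note that once the uncertainty penalty is included, the “utilitarian” swerve toward fewer people can lose to staying the course, which is exactly why the clean trolley-problem framing breaks down.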

 
 
Antisocialdarwinist
 
Total Posts:  6402
Joined  08-12-2006
 
 
 
23 June 2016 17:00
 

Interesting. I’d love to have an autonomous car, but it would have to put my well-being ahead of everyone else’s. No way would I buy a car knowing it might swerve into a barrier to avoid running over a jaywalker.

 
 
KathleenBrugger
 
Total Posts:  1511
Joined  01-07-2013
 
 
 
23 June 2016 17:54
 
Antisocialdarwinist - 23 June 2016 05:00 PM

Interesting. I’d love to have an autonomous car, but it would have to put my well-being ahead of everyone else’s. No way would I buy a car knowing it might swerve into a barrier to avoid running over a jaywalker.

Actually some of the scenarios choose between people crossing legally and people jaywalking. Maybe that will be part of the algorithms: if the pedestrian isn’t in a crosswalk all bets are off.
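If legality did become part of the algorithm, it might enter only as a tie-breaker after counting people. A purely hypothetical sketch:

```python
# Toy version of the "all bets are off outside a crosswalk" rule:
# prefer sparing the larger group; among equal-sized groups, spare
# the one crossing legally.

def pick_group_to_spare(groups):
    """groups: list of (size, in_crosswalk) tuples; returns the index
    of the group the car tries to avoid hitting."""
    return max(range(len(groups)),
               key=lambda i: (groups[i][0], groups[i][1]))

# Two groups of equal size; only the second is in a crosswalk.
print(pick_group_to_spare([(2, False), (2, True)]))  # → 1
```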

 
 
LadyJane
 
Total Posts:  3008
Joined  26-03-2013
 
 
 
23 June 2016 19:19
 
 
 
hannahtoo
 
Total Posts:  6921
Joined  15-05-2009
 
 
 
23 June 2016 19:20
 

Scientific American has an article in its June edition titled “The Truth about Self-Driving Cars.”  The author contends that the technology for cars that drive you anywhere is much further off than most people imagine.  For one thing, “the code required will be orders of magnitude more complex than what it takes to fly an airplane.”  The difficulties were described this way:

Reaching this level of reliability will require vastly more development than automation enthusiasts want to admit…Think how often your laptop freezes up…Software for automated driving must therefore be designed and developed to dramatically different standards from anything currently found in consumer devices…Achieving these standards will be profoundly difficult and require basic breakthroughs in software engineering and signal processing.

(I’d add:  think how often your smartphone direction map screws up.) 

The author goes on to say that the most likely near-term application of automated cars will be on designated stretches of road in designated lanes.  This would be similar to the concept of automated trains that exist today.

 
Twissel
 
Total Posts:  2580
Joined  19-01-2015
 
 
 
23 June 2016 21:47
 

This is one of those hypotheticals that sound interesting but have nothing to do with reality:

Limited autonomous systems are here (cars that can follow a stretch of road or park themselves, even fully autonomous trucks), and with each mile they record in their black boxes, whether driven or self-driving, they gather all the data needed to train fully autonomous passenger cars.

Just as with airplane autopilots there will be mistakes, but with car-to-car communication between AIs, vehicle-to-vehicle accidents will almost disappear, just as plane collisions have.

We will know when the software is good enough when an insurance company offers lower rates for self-driving cars.

 
 
Poldano
 
Total Posts:  3295
Joined  26-01-2010
 
 
 
23 June 2016 23:47
 
hannahtoo - 23 June 2016 07:20 PM

...

Reaching this level of reliability will require vastly more development than automation enthusiasts want to admit…Think how often your laptop freezes up…Software for automated driving must therefore be designed and developed to dramatically different standards from anything currently found in consumer devices…Achieving these standards will be profoundly difficult and require basic breakthroughs in software engineering and signal processing.

...

No kidding. Fortunately, the software does not need to be based on Windows, iOS, or Android, and can be subdivided so that safety in traffic can override other concerns. Plus, it may not be reasonable to allocate all sensing and decision-making responsibilities to individual vehicles; traffic authorities may need to step up to the plate to provide infrastructure that is a little bit more automation-friendly than hand signs.

hannahtoo - 23 June 2016 07:20 PM

...

(I’d add:  think how often your smartphone direction map screws up.) 

You’re probably using Apple Maps.  wink

Seriously, updating portable direction maps for real-time conditions is a serious problem, and will remain so. The updates to the databases are done by humans, who make lots of errors, and the data involves lots of dependencies, which makes changes error-prone. There may be ways to algorithmically check for screw-ups, but with the kind of graph-theoretical complexity that can be present in road maps, the algorithms may take too long to execute on computers that the traffic authorities have the budget to buy nowadays.
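For what it’s worth, one of the cheaper algorithmic checks would be a simple reachability test after each human edit. This toy sketch (with made-up roads) only catches intersections that became unreachable; the harder consistency problems stay hard.

```python
# After an edit to the road database, verify no intersection became
# unreachable from a known-good starting point. Breadth-first search
# runs in time linear in the number of road segments.

from collections import deque

def unreachable_nodes(roads, start):
    """roads: dict node -> list of neighbor nodes (directed segments)."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in roads.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(roads) - seen

# A bad edit deleted the only road into "D".
roads = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["A"]}
print(sorted(unreachable_nodes(roads, "A")))  # → ['D']
```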

Twissel - 23 June 2016 09:47 PM

...

We will know when the software is good enough when an insurance company offers lower rates for self-driving cars.

Amen. Unfortunately, they are obligated to base their actuarials on realistic data, not set-piece experiments, so there will be quite a bit of litigation to get through before it happens.

 
 
Hypersoup
 
Total Posts:  688
Joined  24-01-2013
 
 
 
24 June 2016 02:02
 

I vote for barriers, and pedestrian bridges.

 
 
Hypersoup
 
Total Posts:  688
Joined  24-01-2013
 
 
 
24 June 2016 02:05
 

And hovering drones serving me iced lemon tea when I get there…

 
 
hannahtoo
 
Total Posts:  6921
Joined  15-05-2009
 
 
 
24 June 2016 05:34
 

Poldano:
You’re probably using Apple Maps.  wink

Nope, Google Maps screws up a lot too.  :/

 
icehorse
 
Total Posts:  6881
Joined  22-02-2014
 
 
 
24 June 2016 13:45
 
Twissel - 23 June 2016 09:47 PM

This is one of those hypotheticals that sound interesting but have nothing to do with reality:

Limited autonomous systems are here (cars that can follow a stretch of road or park themselves, even fully autonomous trucks), and with each mile they record in their black boxes, whether driven or self-driving, they gather all the data needed to train fully autonomous passenger cars.

Just as with airplane autopilots there will be mistakes, but with car-to-car communication between AIs, vehicle-to-vehicle accidents will almost disappear, just as plane collisions have.

We will know when the software is good enough when an insurance company offers lower rates for self-driving cars.

Well this thread IS in the philosophy forum after all wink

 
 
Poldano
 
Total Posts:  3295
Joined  26-01-2010
 
 
 
24 June 2016 20:27
 
icehorse - 24 June 2016 01:45 PM
Twissel - 23 June 2016 09:47 PM

This is one of those hypotheticals that sound interesting but have nothing to do with reality:

Limited autonomous systems are here (cars that can follow a stretch of road or park themselves, even fully autonomous trucks), and with each mile they record in their black boxes, whether driven or self-driving, they gather all the data needed to train fully autonomous passenger cars.

Just as with airplane autopilots there will be mistakes, but with car-to-car communication between AIs, vehicle-to-vehicle accidents will almost disappear, just as plane collisions have.

We will know when the software is good enough when an insurance company offers lower rates for self-driving cars.

Well this thread IS in the philosophy forum after all wink

Which means we’re all supposed to be sitting in overstuffed chairs quaffing cognac until the obscurity of our pronouncements matches that of our expression.

So, self-driving cars are entirely relevant to philosophy.

wink

 
 
Antisocialdarwinist
 
Total Posts:  6402
Joined  08-12-2006
 
 
 
20 September 2016 10:09
 

Here’s a website that allows you to choose what you think a self-driving car should do in different scenarios: The Moral Machine.

At the end of the survey, it correlates your answers with your apparent values, or preferences. For example, does “saving more lives” matter a lot to you, or not at all? I took the quiz from the perspective of pure self-interest, assuming I was in the self-driving car. In scenarios that forced me to choose between killing pedestrians or killing car passengers, I always chose to kill the pedestrians. In scenarios that forced me to choose between killing different pedestrians, either by plowing straight into one group or swerving and plowing into another group, I always chose not to swerve because not swerving is safer than swerving.

The website concluded the following things about me:

1. My most saved character was a masked man running with a bag of money; my most killed character was a jogger.
2. “Saving more lives” mattered little to me (about a 2 on a scale of 1 to 5).

(The rest of these had me maxed at one extreme end of the scale.)
3. “Protecting passengers” mattered a lot to me (this was the guiding principle behind all my choices, after all).
4. I prefer older people to younger people.
5. I prefer large (i.e., fat) people to fit people.
6. My social value preference is for crooks over doctors.

Am I imagining it, or is the survey designed to make “protecting passengers” reflect a certain set of unrelated values? Pedestrians were positioned in the scenarios such that always choosing to protect passengers—while ignoring the outcomes for pedestrians—would kill younger people, fit people and doctors, while protecting older people, fat people and crooks.

The site also shows what “other people” chose for each category, presumably the average for all respondents. Most are pretty predictable, but the one that surprised me was that the average preference for “protecting passengers” was smack dab in the middle of the scale, between “does not matter” and “matters a lot.” This is probably because the survey stated that the respondent is an “outside observer,” which I didn’t notice until after I finished.

Still, from a marketing standpoint I think the highest priority for a self-driving car should always be the safety of the passengers. That’s my highest priority when I’m driving. Who’d buy a car knowing it would kill you if it had to choose between that or killing multiple pedestrians? Would you feel guilty if your self-driving car killed a whole family of pedestrians in order to save your life? Which is the lesser evil: dying, or feeling guilty because your self-driving car killed a family?

 
 
diding
 
Total Posts:  262
Joined  07-01-2016
 
 
 
28 September 2016 05:59
 

If you were on one of those narrow mountain roads like in Bolivia and your brakes went out, and your choices were to plow down the family of Bolivians in the road or plunge yourself off the cliff, what would you do?  (Consider that grinding your car into the cliff wouldn’t slow you down enough to save the family.)

 
LadyJane
 
Total Posts:  3008
Joined  26-03-2013
 
 
 
28 September 2016 10:46
 

Defensive driving techniques train us to resist the urge to swerve when something crosses our path in the road.  I imagine that applies to self-driving cars as well.  Probably to a fault.  The safest bet, in the absence of brakes, would likely be to throw it into neutral and hope for a hill.  Lo siento to the hypothetical Bolivian family.

 
 
diding
 
Total Posts:  262
Joined  07-01-2016
 
 
 
28 September 2016 15:22
 
LadyJane - 28 September 2016 10:46 AM

Defensive driving techniques train us to resist the urge to swerve when something crosses our path in the road.  I imagine that applies to self-driving cars as well.  Probably to a fault.  The safest bet, in the absence of brakes, would likely be to throw it into neutral and hope for a hill.  Lo siento to the hypothetical Bolivian family.

That kind of makes sense unless you knew you were going downhill for quite some time.  I’m trying to picture the event in real time, and every time I see a little Bolivian baby in one of those woven backpacks I see myself swerving off the cliff. It’s an interesting experiment.  I’ve been trying it with: hot German backpacker girl (I take her out every time), old lady (I take her out, too), any of my cousins (off the cliff), my father-in-law (undecided, he would want me to run him down), my mom (she would also want me to run her down).

 