
The ethical dilemma of self-driving cars


RainDreamer


With the advent of self-driving cars, we might see something similar to Asimov's 3 laws of robotics. What do you think? How should we program our self-driving cars to make those kinds of decisions - split-second ones to us, but a lifetime of deliberation for them?

Edited by RainDreamer

You don't.  These dilemmas are made up to sell magazines.  Any self-driving car controller uses machine vision to look for a clear path.  It's a probabilistic thing - no algorithm exists that absolutely guarantees the path is clear, but there are "good enough" solutions.

If the path is completely blocked, the car tries to find the course of action that will minimize damage, but this is only using the car's relatively crude sensors and limited ability to tell what is in front of it.  Realistic ones won't be able to tell the difference between a yellow cargo truck and a school bus, just that there is a large vehicle-shaped obstruction in front.

Anyway, the movement planner is just going to apply the brakes to maximum and aim for the least-blocked path.  If a school bus and a bread truck are blocking the road, a crash is inevitable, there are no other alternatives, and the school bus is 5 feet back from the bread truck, the automated car is going to aim for the school bus, because it calculates that the energy at collision will be lower since the car will have had 5 more feet to brake.
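To make that concrete, here's a toy sketch of that "brake hard, pick the lowest-energy impact" logic. Every number, mass, and name below is an invented illustration, not anything from a real autonomous-driving stack:

```python
# Toy sketch: brake to maximum, aim for the lowest-energy impact.
# The planner knows only distances to obstructions, not what they are.

def impact_energy(speed_mps, gap_m, mass_kg=1500.0, decel_mps2=8.0):
    """Kinetic energy left at impact after braking over gap_m metres,
    using v_f^2 = v_0^2 - 2*a*d (clamped at zero if the car stops)."""
    v_sq = max(speed_mps**2 - 2.0 * decel_mps2 * gap_m, 0.0)
    return 0.5 * mass_kg * v_sq  # joules

# Two blocked paths; the second obstruction sits ~5 ft (1.5 m) further back.
gaps = {"obstruction_A": 30.0, "obstruction_B": 31.5}  # metres
speed = 25.0  # m/s, roughly highway speed

target = min(gaps, key=lambda p: impact_energy(speed, gaps[p]))
print(target)  # -> "obstruction_B": 1.5 m more braking room, less energy
```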

Some automated vehicle teams are working on extremely fancy software solutions that will allow the car to truly consider every option, including drifting and forms of controlled skids in order to evade obstructions.  So under some circumstances, you might come around a corner on an icy road and your car goes into a controlled drift around the curve.  But it's not going to be the perfect decision every time.

These ethical dilemmas are bunk because the car won't know what the alternatives are.  Engineers who build these things are far more concerned with getting the machine vision that finds obstructions to be 'dead on' so the car doesn't crash into things it can't detect, and getting the software to be reliable so it doesn't have a software crash during driving.

As for ethics: "find the least dangerous route (to the car), from the alternatives in the vehicle's path" is pretty good.  Over the long term, if every car is out to protect its own occupants from the energy of collisions, this also protects people outside the car from those collisions.  Even if there are edge cases where the car chooses to crash into a school bus or a truck full of explosives because it's the lowest-energy alternative, and lots of people are killed in those edge cases, it's still going to reduce the overall death rate compared to human drivers.

Edited by SomeGuy123

This. It's a non-issue. A more practical problem is for the automation to consider when to take the car off the road and aim for the ditch.
That's usually a far better solution than hitting another car from behind.
Extra bonus during the winter, as the braking distance is longer and the snow will slow the car down nicely while protecting it.
No, it's not always an option, but when it works, it works well.


12 minutes ago, RainDreamer said:

So will the programmers of the car be responsible for an accident caused by the car due to its programming?

In practice, no.  Automated car systems will have to be developed by massive corporations with bottomless pockets, who will also purchase additional insurance coverage.  The individual programmers won't face individual responsibility.

 When automated cars crash and cause accidents, the whole matter will get dragged into court.  On the bright side, automated cars should have amazingly clear event records that include high resolution video taken from multiple angles, a detailed software log of every decision made by the car and the key variables used to make that decision, and so forth.  

This means in cases where the car wasn't at fault it should be possible to show this in court.

The problem these liability issues create is that the courts don't really have any absolute limits on damages, and lawyer fees are very expensive.  A jury could award the plaintiffs a hundred million dollars for a single crash.  And unlike with GM's cars, it will be obvious when an automated car has killed someone.  (GM, since it doesn't keep detailed event records and the car doesn't drive itself, can hide behind doubt - "did that faulty ignition switch really guarantee a death?")

For this reason I don't know if automated cars will ever really take off.  How will the manufacturers stay in business if they, say, kill 10 people (even if they statistically save 1,000 lives) and get slammed with a billion-dollar lawsuit for each death?  The courts cost an immense amount just to show up for a case (lawyer fees), and the car manufacturers won't get any credit for the 1,000 lives they saved if they are facing a case for killing one person.

The fix for this is the same thing they do for vaccines.  If you are injured by a vaccine, a board reviews your case and pays a fixed compensation.  No point in lawyers - the government has denied you the right to sue.  The reason to do this is that if automated cars really save 100 lives in vehicle crashes for every person they kill, it's beneficial to society if we have them.

But it'll take years for such liability exemptions to be formed into law, if that ever happens - and it could hold up the development of automated cars for decades.

Edited by SomeGuy123

Self-driving cars are, of course, much better. They take human error out of the equation, and thus eliminate a major source of accidents. However, as we develop these cars, we have to think about the direction we want them to advance. That is why I mentioned the laws of robotics - what do we want our self-driving cars to do in order to fulfill their purposes? Will we have a set of laws like:

1. Protect human lives

2. Protect self from damage

3. Get to destination as fast as possible

 

Or maybe:

1. Protect owners' lives

2. Protect self

3. Follow traffic laws

4. Get to destination as fast as possible

 

And so on?

Those objectives will help us think about how to minimize risk to humans while ensuring the car's effectiveness.
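One way to read such a ranked list is as a lexicographic ordering, where a higher law can never be traded away for a lower one. A toy sketch of that idea, with entirely invented maneuvers and scores:

```python
# Hypothetical sketch: pick a maneuver by comparing candidate outcomes
# lexicographically, so rule 1 always dominates rule 2, and so on.
# Lower is better for each objective, in priority order:
# (expected harm to humans, expected damage to self, trip time in seconds)
candidates = {
    "brake_hard":      (0.02, 0.30, 400),
    "swerve_to_ditch": (0.01, 0.80, 420),
    "maintain_course": (0.50, 0.50, 380),
}

# Python compares tuples element by element, which is exactly a
# lexicographic ordering: "protect human lives" can never be traded
# away for a faster arrival.
best = min(candidates, key=candidates.get)
print(best)  # -> "swerve_to_ditch"
```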


a ) A robodriver requires an efficient wireless network to drive effectively.
b ) The choice problem arises only on large roads with high traffic, not on a desert road.
c ) Smartphones, navigators, etc., definitely keep a wireless connection inside all these cars.

So, we can presume that all such vehicles are permanently connected to an outside communication infrastructure and permanently report about themselves, at least to wi-fi/4G operators.
That means they can all be under the all-seeing eye of a local Skynet which can shepherd them the whole time they are on the road.

So it would not be the "car/driver" making the decision, but a big and clever computer in a city traffic department twenty miles away.
In this case:
1. There is no "crash myself or not," because there is no "myself" - just 200,000 virtual car objects in the Skynet's mind.
2. The choice is absolutely clear: minimize casualties.
Either a car asks the traffic Skynet, "What should I crash into?" and gets the recommendation: "Lock your manual override. Keep forward."
Or the traffic Skynet just enables Emergency Remote Override on several cars and drives them at its own discretion.
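As speculative as the post itself: the core of such a "traffic Skynet" decision would just be a minimization over joint plans. A hypothetical sketch, with all names and casualty figures invented for illustration:

```python
# Purely speculative sketch of the central-controller idea: the controller
# sees every car as a virtual object and picks the joint maneuver with the
# fewest expected casualties. Everything here is invented.
joint_plans = [
    {"name": "all_keep_forward",        "expected_casualties": 3.0},
    {"name": "override_cars_17_and_42", "expected_casualties": 0.4},
    {"name": "stop_whole_corridor",     "expected_casualties": 1.1},
]

chosen = min(joint_plans, key=lambda p: p["expected_casualties"])
print(chosen["name"])  # -> "override_cars_17_and_42"
```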
 


34 minutes ago, kerbiloid said:

a ) A robodriver requires an efficient wireless network to drive effectively.


Since when? No self-driving car in existence receives orders remotely. They are all 100% self-contained driving systems, and they work perfectly fine without a central supercomputer. Those cars that do have permanent mobile uplinks have them for different reasons.


Not orders - navigation, and so on.
I mean, this problem appears only on streets and highways, not in the middle of a desert with no mobile communications.
So, we can presume that a robocar is always online.

As it is always online, we can provide it with some Emergency Online Protocol.

"Hello, MasterControl. This is MobileObject id=2398562985. Can't evaluate an optimal route. Please, give me your suggestion. Priority: Highest.".
"Hello, MobileObject id=2398562985. This is MasterControl. Emergency Protocol Enabled. Lock your manual override. Keep forward."
"Roger, MasterControl."
 

Edited by kerbiloid

4 hours ago, RainDreamer said:

So will the programmers of the car be responsible for an accident caused by the car due to its programming?

I imagine that precedents established in aviation would be relevant. Autoflight systems have existed for decades, yet the pilot in command is still always the human pilot. The autopilot is just a tool that the human pilot uses to operate the aircraft. 

People being people, some will certainly get up to shenanigans while their car's auto-drive system is manoeuvring their vehicle. Even so, I imagine that the legal responsibility for how the vehicle is operated (including the expectation that the human not be up to something in the back seat and could take over from the machine at any time) would remain with the driver.

Edited by PakledHostage

Kinda. Vehicle and car-system hacking will make lawyers, courts, and a bunch of storytellers happy at some point; no doubt those shenanigans are "only a myth."
People tried those kinds of systems with cars about 20 years ago, so yup, nothing really new.

This thread may not bode well, but threads like it tend to show up often after the April Fools' jokes around here.

If and when the whole planet's car fleet makes the transition, things will get different; while some cars are driven by humans and others by CPUs, things will probably get a little messy...

Edited by WinkAllKerb''

If the car were really smart, it would realize that the truck had an unstable load; it would communicate with the truck in front, telling it to slow down and move right to exit, while informing the other cars.

The self-driving car could alert the motorcycle to slow down and move to the right part of its lane, thereby allowing two vehicles in the space of one.

Problem averted. 

See, in a driverless-car world, the cars seek to behave as an anticipatory fluid; collisions of any kind increase the viscosity of the state (collisions are frictional interactions, and frictional interactions cause viscosity). Consequently, as soon as car 1 anticipates friction, it alerts the cars that are behind its most forward position to slow down and find room away from the axis of friction. This includes cars following car 1, since those cars will also cause friction.

So the motorcycle would slow and move over, anticipating that its lane would become more viscous if it does not, and the car behind it would slow down, allowing car 1 to move in safely.
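A minimal sketch of that backward alert propagation, with an invented position/message model (nothing here reflects a real vehicle-to-vehicle protocol):

```python
# Hypothetical sketch of the "anticipatory fluid" idea: when one car
# anticipates friction (a potential collision) ahead, every vehicle behind
# the friction point is asked to slow down and spread out.

def propagate_slowdown(vehicles, friction_position_m):
    """Ask every vehicle behind the anticipated friction point to decelerate.
    vehicles: list of dicts with 'id', 'position_m' (along the road),
    and 'target_speed_mps'. All figures are illustrative."""
    for v in vehicles:
        if v["position_m"] < friction_position_m:
            # Followers would themselves add friction, so they slow too.
            v["target_speed_mps"] *= 0.7
    return vehicles

traffic = [
    {"id": "car_1",      "position_m": 120.0, "target_speed_mps": 30.0},
    {"id": "motorcycle", "position_m": 110.0, "target_speed_mps": 32.0},
    {"id": "car_2",      "position_m": 90.0,  "target_speed_mps": 30.0},
]

# car_1 anticipates friction at 150 m; everyone behind that point slows.
propagate_slowdown(traffic, friction_position_m=150.0)
```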


I'm pretty sure this is a small problem hugely outweighed by the advantages that self-driving cars have: ideally, once every car is automated, car accidents should drop to almost zero, since the cars can communicate with each other - for example, one car will tell the others that it has a problem and is about to slow down. They do not exceed speed limits, and they keep the right safety distance (which no one respects nowadays).

Etc. That leaves the only remaining accidents to car malfunction, which is actually a very small fraction of car accidents in the world.


My view is that ethics is essentially a heuristic thing, evolved and shaped for the usual situations of society. Ethical dilemmas frequently postulate unrealistic situations, combined with unrealistically certain knowledge about them.

And it may be similar with self-driving cars. Their programming isn't going to be a whole bunch of principles and if-then-else statements worked out by a programmer; it's going to be more of a heuristic approach, and the fine details won't actually be *known* to the designers. Neither will the cars have perfect information, not least because they can never know how another vehicle is going to act. It's even quite plausible that a self-driving vehicle has distinct code paths for emergency reactions, in which maybe it isn't even worth wasting the time to fetch the description of the neighbouring vehicles from memory - it's best to just hit the brakes and steer to avoid the nearest object.
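A hypothetical sketch of such a dedicated emergency path, where no object descriptions are looked up at all; the sensor format and thresholds are invented:

```python
# Toy sketch of a separate emergency code path: skip object classification
# entirely, brake hard, and steer away from the nearest detected obstacle.

def emergency_react(obstacles):
    """obstacles: list of (distance_m, bearing_deg) pairs, with the sign
    of the bearing giving left (-) or right (+). No lookup of what each
    object *is* - only where it is."""
    nearest = min(obstacles, key=lambda o: o[0])
    steer = "left" if nearest[1] > 0 else "right"  # steer away from it
    return {"brake": 1.0, "steer": steer}

print(emergency_react([(12.0, 5.0), (30.0, -10.0)]))
# -> {'brake': 1.0, 'steer': 'left'}  (nearest object is to the right)
```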


I wonder... if all cars become self-driving cars with onboard computers and network capability, can the whole traffic flow act as a hive-mind system? So many computers in such close proximity. Each car on the street increases the system's computational power, so it could conceivably detect and control the entire traffic flow to react to any emergency or accident. No more car pile-ups, no more running into each other because they can't predict what the other car would do, etc.

We would have a supercomputer in every city like this...

But perhaps we should go back to the ethics part of this topic: yes, it is safer to remove the human element from driving to avoid human errors, but you still have a hulking piece of metal travelling at deadly speed, and eventually something will go wrong. So what should we design our cars to do when things do go wrong and the car has to make a choice it can't get out of?


36 minutes ago, cantab said:

It's even quite plausible that a self-driving vehicle has distinct code paths for emergency reactions, in which maybe it isn't even worth wasting the time to fetch the description of the neighbouring vehicles from memory - it's best to just hit the brakes and steer to avoid the nearest object.

From a computer's perspective, cars are slower than a glacier from a human's perspective. While human reaction times are in hundreds of milliseconds, computers can react in nanoseconds. That's roughly the difference between a few seconds and a decade.
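The arithmetic roughly checks out. Picking 200 ms and 2 ns as illustrative figures (my choices, not the poster's):

```python
# Rough check of the analogy: human reaction ~hundreds of milliseconds,
# computer reaction ~nanoseconds (figures chosen for illustration only).
human = 0.2          # seconds
computer = 2e-9      # seconds
ratio = human / computer            # 1e8

# Scale a 3-second driving event by the same ratio:
slowed = 3 * ratio                  # 3e8 seconds
years = slowed / (365.25 * 24 * 3600)
print(f"{years:.1f} years")         # ~9.5 years, i.e. roughly a decade
```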


I'd say do it with a utilitarian approach: attempt to minimize damage as much as possible to anything or anyone that might be involved in such a crash.

Let's postulate an event: an automated car finds its brakes losing effectiveness while running at highway speed on a four-lane highway. In front of it are two cars stopped by a traffic jam ahead; one is a minivan carrying 6 people, the other is a city car carrying 2. The automated car's steering system is functional, so the car can control where it's heading. The highway's shoulders are too tight for the automated car to pass the two cars ahead, so one or both will inevitably be hit.

What should the automated car attempt to do? Hit one (and which)? Hit both by clipping their bumper corners? Attempt to avoid both by running into a safety rail or going off-road?
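For illustration only, here's what a crude utilitarian scoring of those options might look like. The severity numbers are invented, and a real car would not know occupant counts at all:

```python
# Toy sketch of a utilitarian choice: score each option by expected harm
# (here a crude product of impact severity and people exposed) and pick
# the minimum. All numbers are invented.
options = {
    # option: (impact_severity 0..1, people_exposed)
    "hit_minivan":        (0.8, 6 + 1),      # 6 occupants + our passenger
    "hit_city_car":       (0.8, 2 + 1),
    "clip_both_bumpers":  (0.4, 6 + 2 + 1),
    "run_into_guardrail": (0.6, 1),          # only our passenger exposed
}

def expected_harm(opt):
    severity, people = options[opt]
    return severity * people

best = min(options, key=expected_harm)
print(best)  # -> "run_into_guardrail"
```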


42 minutes ago, Jouni said:

From a computer's perspective, cars are slower than a glacier from a human's perspective. While human reaction times are in hundreds of milliseconds, computers can react in nanoseconds. That's roughly the difference between a few seconds and a decade.

Not necessarily. While the computer would technically have a faster reaction time once it recognizes danger, it's somewhat limited in what it can do in that time. I.e., if you tell a computer and a human to press a button when a light comes on, the computer will always win, because it's such a simple operation. If you were to program a computer to consider everything that a human mind might consider in those few seconds (things like whether to run into the bread truck or the school bus), the computer would probably have some competition - and you'd be famous, because you'd have constructed the world's first artificial intelligence. Of course, when you cut out those kinds of considerations and make it as simple as "what course of action results in the lowest force exerted on the passenger," the computer wins by default.

EDIT: Also, if a self-driving car puts me so close behind a truck that it can't stop in time if something falls off, I don't want one.

Edited by Vaporo

1. The overwhelming majority of automobile accidents (over 90%, by most estimates) are driver error, so the chance that driverless cars won't be safer is small.

2. The ethical question I see is: would a car ever be programmed to minimize harm to all humans, or will it be programmed to minimize harm to me and my passengers? It's the train switch dilemma - you see a locomotive heading towards 5 workers past a set of points, and you can throw a lever, putting the locomotive onto a siding where it will kill only 1 worker; you have to pick one. Generally people throw the lever, actively participating in the death of 1 person to avoid the loss of 5. If I own the vehicle, I want it set to maximize the survival of the occupants of my car, period - it's the train switch where the lone worker is my kid, so tough luck for the 5 workers.

Here's a self-driving car analogy: my car is clear to move through an intersection, and it detects a car about to run the light. In the crosswalk is a slowly moving group of 10 old people with walkers (crossing parallel to the self-driving car's path). The car could, with sufficient computing (I'm not thinking of early models here, but who knows), see that it could continue and serve as a barrier between the 10 pedestrians and the oncoming, light-running vehicle, deciding that my car, with at most 4 people aboard, is a good trade for 10.

Edited by tater

6 hours ago, kerbiloid said:

Not orders - navigation, and so on.
I mean, this problem appears only on streets and highways, not in the middle of a desert with no mobile communications.
So, we can presume that a robocar is always online.

 

Nonsense. Navigation doesn't require you to "be online". GPS is receive-only.

And it works great in deserts with no mobile communications, too. Of course, if there are no roads, your car's satnav system will have trouble knowing where to go, because satnavs are stupid and can only calculate routes over roads.


Self-driving cars poll available data far faster than people can. While you might drive through a neighborhood aware that a kid could possibly run into the street once you notice the kid there, the car can observe this and ask itself, millisecond by millisecond: could I stop now? How about now? In this way, the car can subtly adjust its current path and speed over those timeframes to maximize the chance of not hitting the possible threat. The larger issue is not having the vehicle paralyzed by caution.
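A toy sketch of that "could I stop now? how about now?" loop; the deceleration, latency, and margin figures are invented for illustration:

```python
# Each tick, compare stopping distance against the distance to a possible
# hazard, and shave off a little speed if the margin gets thin.

def stopping_distance(v_mps, decel_mps2=8.0, latency_s=0.05):
    # Distance covered during reaction latency plus braking: v^2 / (2a).
    return v_mps * latency_s + v_mps**2 / (2.0 * decel_mps2)

def tick(speed_mps, hazard_distance_m, margin_m=2.0):
    """Return a slightly reduced target speed if we could no longer stop
    short of the hazard with the desired margin."""
    if stopping_distance(speed_mps) + margin_m > hazard_distance_m:
        return speed_mps * 0.95  # shed a little speed, re-check next tick
    return speed_mps

speed = 13.0  # m/s, ~47 km/h, residential street
for distance in (40.0, 30.0, 20.0, 12.0):  # kid near the road, closing in
    speed = tick(speed, distance)
    print(f"{distance:5.1f} m -> target {speed:.2f} m/s")
```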

This is a good watch:

 


This is a case of people superimposing emotions and ethics onto machines which simply do not (and cannot) care. It's never going to make calculations based on loss of human life, because it won't be thinking about people (or anything, for that matter), but rather determining whether its path is clear and whether it needs to stop. That's all.

Edit - I also believe they can't possibly be any more dangerous than the teenager in the next lane who is busy texting or putting on lipstick or otherwise showing general disregard for their life and the lives of everyone around them.

Edited by Randazzo

I'm not giving them emotions; I'm presuming that they could be programmed to minimize harm at some point in the future. In the above TED video, imagine the wheelchair/duck scenario on a single-lane, one-way street, with only enough road space for 1 vehicle (perhaps cars parked on both sides of the street). The car's sensors detect a huge dump truck closing from behind. It's not going to stop. The only egress for the car is through the idiot in the wheelchair - punch the gas, smash the chair, then duck to the side of the road and let the truck blaze past. Staying stationary possibly kills the people in your car, AND you run over the wheelchair anyway. Ramming the chair gets you the heck out of the way of the truck with only the chair driver and the duck harmed.

Edited by tater

14 minutes ago, tater said:

I'm not giving them emotions; I'm presuming that they could be programmed to minimize harm at some point in the future. In the above TED video, imagine the wheelchair/duck scenario on a single-lane, one-way street, with only enough road space for 1 vehicle (perhaps cars parked on both sides of the street). The car's sensors detect a huge dump truck closing from behind. It's not going to stop. The only egress for the car is through the idiot in the wheelchair - punch the gas, smash the chair, then duck to the side of the road and let the truck blaze past. Staying stationary possibly kills the people in your car, AND you run over the wheelchair anyway. Ramming the chair gets you the heck out of the way of the truck.

I was referring to the OP, but in the situation you describe I still doubt there'd be any consideration of passengers' lives involved. The car would determine whether IT could stop before hitting the obstacle in front of it, and nothing more; the vehicle approaching from behind is not something it can control.

The programming is never going to recognize that person in a wheelchair as a person in a wheelchair; it's just a thing in the way, and the car doesn't recognize itself as carrying people. When it does reach that point, we've probably got to check and make sure we don't have new overlords.

Edited by Randazzo
