The ethical dilemma of self-driving cars


RainDreamer

It's sorting hazards, and it knows that traffic cones are cones. The programming must certainly allow the vehicle to hit a cone in lieu of hitting a pedestrian, for example. I would hope the vehicle would hit a squirrel rather than risk an accident as well. All hazards are not, and should not be, considered equal if the vehicle can discriminate between them. Also, the car must already be programmed to proactively avoid some hazards approaching from behind. Say a fire engine approaches the intersection you are at, with your vehicle blocking the road. Even if you were going to go straight, the vehicle must already be programmed to make a right turn on red and get out of the way.


Yeah. The vehicle will know that certain obstructions, like traffic cones, are "less solid" than others.

Eventually, the neural net might sometimes correctly identify a pedestrian in a wheelchair as a pedestrian. Google has demoed it recognizing cyclists. It's all about whether the pedestrian looks like the pedestrians they tested it on - cue the outrage when it doesn't recognize people of a radically different race and dress from the people near the Google campus.

It'll just perceive "people" like that as "super bad to crash there, choose anywhere else but them"

Again, it won't be perfect. It might swerve out of the way of some pedestrians in the road and hit the side of a bus full of senior citizens. That's because the car didn't know the bus was full of people; it just perceived a truck or large vehicle, but it recognized the pedestrians from their stride, their arms and legs, and other signifiers, and in its code it sees a truck as a lower-priority "obstruction" than people.

None of the "ethical dilemmas" other posters have made up actually arise. If there are people, evenly spaced, in the way of the vehicle, the car will just decide to run over the one in its drive path, because the shortest and safest braking maneuver is to not swerve at all. If there is a clump of nuclear scientists and a clump of drug dealers, it's just going to run over whoever requires less swerving; it won't know or care who they are or what their occupations are.

Edited by SomeGuy123

Though personally I am on the side of declaring that people are injecting a false ethical dilemma into a situation devoid of any actual dilemma, as a robotics engineer I SHOULD point out that some of the claims here are a bit false.

First off, self-driving cars can ALREADY determine that a given "obstacle" falls into the "human" class, and if the programmers decide to have the car react differently to an obstacle-human than to an obstacle-car, they can easily do that. Exactly what the car should do with that information is largely already settled in the legal arena. If a robot is in the middle of a given action and it puts a human's life at risk, then it needs to cease this action, EXCEPT in cases where ceasing this action puts another human's life at risk. At that point you need to have some system that makes a decision on which action to take. With rare exceptions, that tie-breaker can be pretty much whatever you want (i.e., you cannot have it designed to MAXIMIZE casualties, but as long as you show some sort of consistent logic, it tends to be legally fine).

In the case of an obstacle-human, you COULD (but are not strictly required to) have the system intentionally undergo as gentle a crash as it is capable of in order to dodge that person, the meta-reasoning being that the person(s) in the car are inside a safety device and have a greater chance of surviving. HOWEVER! Do NOT mistake this line of reasoning for something the car is doing, because it is not. The car has a logic tree that basically says "Oh, a human is in the way, look for a path around it, no non-crash path found, find safe crash-path, execute". At no point did it weigh its ethical/moral options; the humans made their decisions in that regard when they programmed it, and the car just follows its logic tree like an obedient puppy.
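To make that concrete, here is a minimal sketch of such a logic tree, assuming invented obstacle classes, priority numbers, and a `Path` structure; no real vendor's code is being quoted here:

```python
from dataclasses import dataclass
from typing import List

# Lower number = more important to avoid. Values are illustrative only.
AVOIDANCE_PRIORITY = {"human": 0, "school_bus": 1, "car": 2, "traffic_cone": 9}

@dataclass
class Path:
    name: str
    swerve: float      # lateral deviation from the current drive path
    hits: List[str]    # obstacle classes this path would strike

def choose_path(paths: List[Path]) -> Path:
    # 1. Prefer any non-crash path, taking the one with the least swerving.
    clear = [p for p in paths if not p.hits]
    if clear:
        return min(clear, key=lambda p: p.swerve)
    # 2. Otherwise pick the crash path whose worst victim is least bad:
    #    maximize the minimum priority value among everything it hits.
    return max(paths, key=lambda p: min(AVOIDANCE_PRIORITY.get(h, 5) for h in p.hits))

# Swerving through cones beats braking straight into a pedestrian:
options = [Path("straight", swerve=0.0, hits=["human"]),
           Path("shoulder", swerve=2.5, hits=["traffic_cone"])]
print(choose_path(options).name)  # -> shoulder
```

Nothing in there weighs ethics at runtime; the "decision" was made when someone wrote the priority table.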

Similarly, the car's ability to detect human occupants in other cars is very situationally dependent on what sensors it has, the lighting conditions, window tints, etc. Chances are quite high that in the absence of smartcar information sharing, the self-driving car will simply say "That is an obstacle-car-van; avoid if possible; if not, attempt to crash in the following ways for that type of car." It is certainly capable of saying "That is an obstacle-school_bus, AVOID AT ALL COSTS" if the car's manufacturers choose for it to do that.

Do not mistake the developments in SD-car tech over the next 10 years for something that actually presents us with these "ethical" questions, because it doesn't. Past that, it depends on how truly pervasive the technology gets, but those sorts of "ethical" questions very quickly become analogs of similar situations that people don't actually tend to face, so I don't anticipate much in that regard, especially because, as has almost always been true, what will be done is what is found to be legal. Legal does not always mean ethical.


4 hours ago, Mazon Del said:

The car has a logic tree that basically says "Oh, a human is in the way, look for a path around it, no non-crash path found, find safe crash-path, execute". At no point did it weigh its ethical/moral options; the humans made their decisions in that regard when they programmed it, and the car just follows its logic tree like an obedient puppy.

I am pretty sure the video is not saying that the car has ethics or faces a moral dilemma here. It is the programmer who has to think about how to design the logic tree to minimize harm in an ethical way, just as you said. It also asks who should get to make these kinds of decisions: the programmers themselves, the corporations making the cars, or the government.


When you are watching cable TV, you don't own the TV channel.
You own a piece of glass and non-exclusive rights to use the channel and its client software.

When you are riding in a robo-car, you own a piece of metal and non-exclusive rights to use the (hypothetical) traffic channel and its client software.
Nobody cares whether it's you in this exact robocar or not.
It's not the car that must choose between "my beloved Master and other miserable people"; the traffic Skynet must minimize casualties.
At least because otherwise a robo-Caterpillar would always be on top.

Edited by kerbiloid

This isn't how you design a reliable system. The machine vision, the logic trees, and all the hardware associated with deciding where to steer the car, plus the subsystems below them that implement those decisions, should be isolated systems. They should not depend on radio links to other cars or anything else to work. There could and should be a GPS link and a data link to the internet to get suggested route information, but none of the decisions made by the car should be directly affected by radio signals sent by other cars.

Firmware bugs should only be patchable if the new firmware is signed by the manufacturer and the car is not in operation.  Frankly, even this seems like a vulnerability to me.

So, no.  It won't have a radio link to a school bus to know it's a school bus because that's a security risk.
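As a minimal sketch of that signed-update gate, assuming Ed25519 signatures via Python's `cryptography` package (the parked-state check and flashing routine are hypothetical stand-ins):

```python
# Hypothetical signed-firmware gate: apply an update only if the image
# verifies against the manufacturer's public key AND the car is parked.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def try_update(image: bytes, signature: bytes, key: Ed25519PublicKey,
               is_parked, apply_firmware) -> bool:
    if not is_parked():                 # never flash while in operation
        return False
    try:
        key.verify(signature, image)    # raises if forged or corrupted
    except InvalidSignature:
        return False
    apply_firmware(image)               # stand-in for the flashing routine
    return True

# Quick self-test with a throwaway key pair:
priv = Ed25519PrivateKey.generate()
fw = b"firmware v2"
print(try_update(fw, priv.sign(fw), priv.public_key(),
                 is_parked=lambda: True, apply_firmware=lambda img: None))  # True
```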

Edited by SomeGuy123

2 hours ago, RainDreamer said:

Would consumers buy cars that follow a traffic Skynet which might sacrifice them for the safety of others?

Would they ride a bus driven by an unknown person who maybe sleeps four hours per night and takes antidepressants?

Also, again, if a robo-Caterpillar decides to save its driver by crashing into a bus instead of falling from a bridge, I'm afraid the bus passengers would still depend on a computer's decision.
Only it would be not their own computer, and not one they had allowed to rule their lives, but the Caterpillar owner's.

 

2 hours ago, SomeGuy123 said:

This isn't how you design a reliable system. The machine vision, the logic trees, and all the hardware associated with deciding where to steer the car, plus the subsystems below them that implement those decisions, should be isolated systems. They should not depend on radio links to other cars or anything else to work. There could and should be a GPS link and a data link to the internet to get suggested route information, but none of the decisions made by the car should be directly affected by radio signals sent by other cars.

Firmware bugs should only be patchable if the new firmware is signed by the manufacturer and the car is not in operation.  Frankly, even this seems like a vulnerability to me.

So, no.  It won't have a radio link to a school bus to know it's a school bus because that's a security risk.


And these are isolated systems. Normally a car runs on its own; it's not a remote-controlled toy car. In normal situations it makes decisions without Skynet.
It also reports its position and status to Skynet every millisecond, as do all the cars around it.
But in the case where the onboard computer "needs to make an ethical choice", it doesn't make the ethical choice itself.
It delegates the decision to a disinterested arbiter, i.e. Skynet/MasterControl.

So the disinterested arbiter decides whether to crash a two-person bolide or a school bus. There are no "my" cars.

Edited by kerbiloid

A data link to "skynet" is a direct path for malware and for software errors in the skynet network to crash the car.  It's a bad engineering decision.  It also enormously raises the complexity of the system.  Latency in communication, dropped packets, signal losses, RF shadows created by bridges and other cars and countless other things - just bad all around.  

You don't make a reliable system by making it more complex than it has to be. You build it out of dead-on reliable, works-every-single-time subsystems, and you keep the interactions between those subsystems as simple as they can possibly be.

The core of an automated car is "pick the least bad path out of what it can see in front of it". By "least bad", I just mean a weighting system, calculated in realtime from what the car's sensors can see, compared against stored information in memory. That's it. Learning is turned off for deployed automated cars - only the test models run learning - so the deployed ones people own always act the same and don't "learn" bad habits.

A second system predicts the possible paths the vehicle is capable of making - based on what systems are still working, vehicle speed, traction, current brake wear, everything it knows about - and it feeds these paths to the planner that chooses them. 
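As a toy sketch of that two-part split (the physics, weights, and `Candidate` structure are all invented for illustration):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    steer: float        # commanded steering angle, rad
    stop_dist: float    # predicted stopping distance for this maneuver, m
    hazard_cost: float  # filled in by the vision system's weighting pass

def feasible_paths(speed: float, traction: float, brake_wear: float) -> List[Candidate]:
    # Second system: propose only maneuvers the hardware can still execute.
    max_steer = 0.4 * traction  # less grip -> only gentler maneuvers offered
    base_stop = speed * speed / (2.0 * 7.0 * traction * (1.0 - brake_wear))
    return [Candidate(s, base_stop * (1.0 + abs(s)), hazard_cost=0.0)
            for s in (-max_steer, -max_steer / 2, 0.0, max_steer / 2, max_steer)]

def pick_path(cands: List[Candidate]) -> Candidate:
    # Planner: realtime weighting; hazards dominate, then short, straight stops.
    return min(cands, key=lambda c: 10.0 * c.hazard_cost + c.stop_dist + abs(c.steer))

paths = feasible_paths(speed=25.0, traction=0.8, brake_wear=0.1)
paths[2].hazard_cost = 5.0               # vision flags an obstruction dead ahead
print(round(pick_path(paths).steer, 2))  # swerves gently rather than hit it
```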

Some day we might have the technology to make "skynet" reliable but this decade or the next isn't that day.

Edited by SomeGuy123
Link to comment
Share on other sites

A data link to Skynet cannot crash the car, because Skynet doesn't control the car.
It collects info from all the cars on the road, manages traffic lights (as now), provides the onboard computers with information about traffic bottlenecks (as now) and other traffic events, and takes real-time photos of traffic incidents and violations (also as now, but probably also using the nearest onboard video recorders, which are already in every car but not yet online).
It doesn't manage the car's low-level movement.
I.e. it does all the same things it does now, except that all cars permanently report their positions and maybe passengers, and it is one service system instead of the several we have now.

And in the rare case when a robo-car is put into a situation where it cannot work out a safe route, it will ask the Big Bro, who watches with eagle's eyes, to assess the situation and compute how to avoid the collision or minimize its effects.
Big Bro (Skynet, MasterControl), who can see the situation as a whole, honestly tries to compute the route with the minimum possible damage and casualties.
It doesn't care which of the cars asked the question; it just tries to save everyone around the possible crash point.
So it computes the route(s) for the car(s) and sends them an imperative command: "Broadcast Command: Emergency Protocol Enabled. Lock the manual override.", "Caterpillar - move left", "School bus - brake", "Sportscar - speed up, keep forward, deploy airbags". It won't drive the cars; the onboard computers still will, but according to the script sent by Big Bro. When it's all over, those cars still on the road return to their normal management.
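A purely invented message format along these lines gives the flavor of such an emergency script; every field name below is hypothetical:

```python
# Hypothetical wire format for the "Big Bro" emergency script described above.
import json

emergency_script = {
    "broadcast": "EMERGENCY_PROTOCOL_ENABLED",
    "lock_manual_override": True,
    "commands": [
        {"vehicle": "caterpillar-07", "action": "move_left"},
        {"vehicle": "school-bus-12",  "action": "brake"},
        {"vehicle": "sportscar-03",   "action": "keep_forward_speed_up",
         "deploy_airbags": True},
    ],
    "resume_normal_control": "when_scene_clear",
}
packet = json.dumps(emergency_script)  # what the arbiter would broadcast;
print(packet)                          # each onboard computer still drives itself
```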

In any other case you will get a competition between car owners over whose car is tougher, and a battle between robo-cars, each of which will try to crash the others to save its own passenger.
The "ethical way" will be a "Carmageddon way".

Edited by kerbiloid
Link to comment
Share on other sites

I honestly don't really see how a car making such a decision is any worse than the driver, who might not even have time to notice most of these factors. The possible damage shouldn't be any worse.

And if the question boils down to whom to blame for the accident, the autopilot can at least ensure that it was keeping a safe distance. And if it really comes down to cargo falling out of a truck as the source of the situation that led to the accident, then the ones who loaded the truck most likely violated the safety rules, with this as the consequence.

And if we have most cars driven by AI, then it's possible to make an emergency lane change without crashing, by coordinating the emergency maneuvers of multiple vehicles.


3 hours ago, kerbiloid said:

A data link to Skynet cannot crash the car, because Skynet doesn't control the car.
It collects info from all the cars on the road, manages traffic lights (as now), provides the onboard computers with information about traffic bottlenecks (as now) and other traffic events, and takes real-time photos of traffic incidents and violations (also as now, but probably also using the nearest onboard video recorders, which are already in every car but not yet online).
It doesn't manage the car's low-level movement.
I.e. it does all the same things it does now, except that all cars permanently report their positions and maybe passengers, and it is one service system instead of the several we have now.

And in the rare case when a robo-car is put into a situation where it cannot work out a safe route, it will ask the Big Bro, who watches with eagle's eyes, to assess the situation and compute how to avoid the collision or minimize its effects.
Big Bro (Skynet, MasterControl), who can see the situation as a whole, honestly tries to compute the route with the minimum possible damage and casualties.
It doesn't care which of the cars asked the question; it just tries to save everyone around the possible crash point.
So it computes the route(s) for the car(s) and sends them an imperative command: "Broadcast Command: Emergency Protocol Enabled. Lock the manual override.", "Caterpillar - move left", "School bus - brake", "Sportscar - speed up, keep forward, deploy airbags". It won't drive the cars; the onboard computers still will, but according to the script sent by Big Bro. When it's all over, those cars still on the road return to their normal management.

In any other case you will get a competition between car owners over whose car is tougher, and a battle between robo-cars, each of which will try to crash the others to save its own passenger.
The "ethical way" will be a "Carmageddon way".

Agreed about communication; one interesting feature of robot cars is that you could create a virtual train by running them fender to fender.
This would let you have more cars on the road and keep fuel use down because of less air resistance.
They would also get real-time information about queues and other issues.
A braking warning, as an extension of the brake light, would be nice, as you would have an extra signal for emergency braking.

The problem is that accessing the cloud will not help much in an accident situation: how up to date is the extra data, and how much delay do you get communicating?
Note that hitting a bus will just cause material damage to the bus, unless you are a heavy truck yourself.


The first instinct is to brake; this also gives you a good estimate of the distance needed to stop. If the distance is too short to avoid a crash, the system starts to consider fallbacks. The first fallback is to change lanes, which might also resolve the issue nicely; the second is to ditch the car if possible; if not, take the impact. Yes, there are a few settings where impacting something else will give less overall damage, but those mostly fall under ditching in bad conditions; hitting another car head-on will be the worse option in all but very hypothetical settings.
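That cascade is simple enough to sketch directly; the predicates and numbers below are invented for the example:

```python
# Illustrative sketch of the brake-first fallback cascade just described.
def emergency_plan(stop_dist, gap_to_hazard, lane_change_clear, shoulder_safe):
    if stop_dist <= gap_to_hazard:
        return "brake"                 # braking alone avoids the crash
    if lane_change_clear:
        return "change_lane"           # first fallback
    if shoulder_safe:
        return "ditch"                 # second fallback: leave the road
    return "brake_and_take_impact"     # least-bad remaining option

print(emergency_plan(stop_dist=45.0, gap_to_hazard=30.0,
                     lane_change_clear=False, shoulder_safe=True))  # -> ditch
```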


Car-on-car accidents wouldn't happen in a skynet world. The cars would all be communicating and would be able to "see" possible accidents far in advance. In fact, the paths of all cars would probably be plotted for their entire journeys and set up so that they never get into collision-likely positions. The only problems would be pedestrians not following laws (jaywalking), and if that were to happen, all cars behind the front car would also immediately brake and prepare for the front car's evasive actions. In a case where a pedestrian is run over and killed, it would be 100% their fault, because they shouldn't have been in the middle of the road at that time.

 

For example, if a bus and a car are heading toward an intersection on intersecting paths, the computer would realize this very quickly and one of them would slow down immediately to prevent the accident. Also, the computer would know the best place to hit a car: for example, the car might aim for the rearmost or frontmost section of another car if an accident is imminent, to lower the chance of casualties. These would all be vastly better than humans.
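A toy version of that intersection check, with all the numbers and the yield rule made up for illustration:

```python
# If two vehicles' arrival windows at the crossing overlap, the predicted
# collision is resolved by slowing one of them down.
def arrival_window(dist_m, speed_ms, length_m):
    # Time interval during which the vehicle occupies the crossing.
    return (dist_m / speed_ms, (dist_m + length_m) / speed_ms)

def deconflict(bus, car):
    b0, b1 = arrival_window(*bus)
    c0, c1 = arrival_window(*car)
    if b0 < c1 and c0 < b1:   # windows overlap -> collision predicted
        return "car slows"    # e.g. the smaller vehicle yields
    return "no conflict"

# bus: 80 m out at 10 m/s, 12 m long; car: 85 m out at 10 m/s, 4.5 m long
print(deconflict((80, 10, 12), (85, 10, 4.5)))  # -> car slows
```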


The scenario in the original video, of a loose load falling, still applies. Mechanical failures can also occur, as can errors in the driving software.

As for pedestrians, attitudes vary by location. In Britain "jaywalking" isn't a thing, and as a pedestrian I may and do cross the majority of roads where I see fit. As a driver I have to be aware that people do this, which means I should pay attention to what people on the pavement are doing. Now of course if somebody suddenly jumps in front of my car with no warning it's their fault and there's not much that can be done, but it highlights the need for a self-driving car to observe its environment and not over-rely on communicating with other vehicles.


8 hours ago, kerbiloid said:

Would they ride a bus driven by an unknown person who maybe sleeps four hours per night and takes antidepressants?

I'd rather have them taking antidepressants than not. It means they're trying to deal with an issue constructively, and want to live.


9 hours ago, kerbiloid said:

Would they ride a bus driven by an unknown person who maybe sleeps four hours per night and takes antidepressants?

They might instead buy cars from another manufacturer whose cars give their owners more priority, for example. Or just drive their own cars manually, as always. It might change the way self-driving cars on the market are designed.


Mechanical failures and the like are certainly still a thing, and this is why having a logging system showing that your car got its necessary maintenance will become even more important in the near future. As we inevitably get more situations where the manufacturer is at fault for defective parts, there will initially be attempts to push the blame onto the owner. If the owner can show that they got the car its mandated regular maintenance and that they were not the driver, then the data can show that nothing the owner did put the car at any additional risk, so the two avenues to pursue legally are the manufacturer or the shop that did the maintenance.

And for those declaring that the cars should be completely air-gapped with no communications, etc.: all of the self-drive engineers and companies are drooling over the various features that become possible once all the cars on the road are networked together and sharing data about conditions internal and external to each other. Like the "car train" previously mentioned, where the cars are all drafting off of one another; route condition info; coordinated crash response ("You are going to crash, so let me try to move THIS way to clear THAT way for you to crash more safely."); etc. Furthermore, as humans end up having less to do to make the car go, they are going to want more to entertain themselves. This means data transmission: radios in the cars with greater signal strength and bandwidth than their smartphones. And as we have seen even with major airlines, why have two radios for the same job? (I am specifically referencing how that guy was able to connect to the wifi router and see [but not edit; it was an "output only" stream of data] engine status data.) So the market forces are oriented toward MORE communication, and the more communication you have, the greater the chance of holes.

