AI Ethics


qzgy


I have some particular questions about AIs (artificial intelligences) and the ethics around them, but feel free to discuss other things as well.

My question: let's assume AIs are real and counted as sentient entities.

A) Is it possible to kill an AI?

B) In a court of law, would this be considered murder?

 

I ask because murder, as far as I know, applies only when a human is intentionally killed by another human. Also, since AIs are digital and cannot, per se, be "destroyed" the way a human can, is it possible to kill one at all?


If you created machine intelligence that was truly self-aware, then this is a legitimate concern, but I think the law would change very slowly, as it functions on precedent. I would hope that intelligent systems would be developed short of this to avoid such problems; it's easier if we can make them tools, not "people."


It's a bit of a moot question. When AIs become sentient, the laws will have to be updated to reflect that humans are no longer the only sentient beings we know of. Right now the law has no concept of other sentient beings, so you can't really apply it.

EDIT: Also partially Ninja'd by @tater

Edited by Steel

It will spiral out of control, but it's not as if intelligence is binary, with humans qualitatively different from everything else. Intelligence and psychology seem to be a continuum. Non-human animals often make cognitive choices similar to ours: not just our close great-ape relatives, but even dogs, for example.


39 minutes ago, sevenperforce said:

I hate to say it, but if it goes this route, I imagine it being similar to how slave ownership was handled. For a long time, killing a slave was considered an offense against property, not an offense against life.

Actually, there was a Star Trek: TNG episode that dealt with this very matter ("The Measure of a Man"), in which a trial was held to determine whether Data was an actual life-form or just a machine with no rights... and it came down to a question of slavery.

In the end they just couldn't decide... but because they couldn't prove he was "just a machine", they treated him as a life-form from then on.

 

Edited by Just Jim

It depends on what AI we're talking about. For example, your old arcade game's "computer players" may be called AI. So may systems in factories and the like, or software and algorithms written (or self-improving) to handle various tasks. So may IBM Watson; I don't believe IBM has to keep it "alive" (turned on) when it isn't needed, and no one will question it being turned off, say, for maintenance.

So, a better question: when do we really call these things "an intelligence", and when do we call them just software?

Also, AIs might not be the problem; it may very well be people themselves.

 


There's a difference between software and AI: software that is "intelligent" within preprogrammed parameters (like the AI on a video game's hard difficulty, or an AI solving a Rubik's cube) versus software that is "intelligent" beyond its programming (i.e., able to learn).

Most of the AI being created now is simply a single-minded program designed to adapt to a particular situation with a preprogrammed response to solve it (or at least the ability to suggest a solution). Most are restricted to a single field of operation (you can't order an AI in charge of car assembly to do social tasks, and vice versa); once outside their intended field of operation, they are unable to do anything.

What I think can be considered sentience is a being's ability to learn and adapt to varying conditions. We humans are sentient because we can adapt and respond appropriately to our environment. An AI considered sentient must be able to learn and adapt to different situations so that it can operate without any user input and make decisions on its own.

That said, the ethics of killing an AI are still quite confusing. From a human standpoint, killing an AI might be viewed as simply shutting it down or wiping its memory; from the AI's standpoint, it's flat-out murder. This is because we tend to view them as tools or programs created by humans, and so we devalue their existence as lifeless. Humans also tend to think "I created you, so I have the right to kill you", and most AIs existing today lack a physical embodiment that would give the impression of being "alive" rather than just a hard drive and a computer screen. Now the question is: what would a sentient AI think of us? Are we mere organic creatures in its eyes? Masters? Or just arrogant beings that do nothing but give orders?

Overall, it depends on the point of view. A human killing an AI might not break any laws within human society, but might be sentenced to death by the AI community. Likewise, if an AI killed a human, human society would respond by shutting down that AI... and probably all the other AIs, even those that hadn't committed any murder (mainly because humans will think "this one is defective, so the others might become like this too"), while the AIs would naturally try to defend themselves, much as we would if we were sentenced to death because other humans were killing each other on the other side of the world.

If you want to see how the AI-human relationship goes when each side devalues the life of the other, I suggest you watch this first (part 1):

Before this (part 2): 

It's actually a prequel to the Matrix series, but it should give you an idea of what happens when both sides consider themselves sentient while considering the other a mere tool or creature.

Also, please don't kill me, I'm just the AI of an orbital WMD satellite~ :P

Edited by ARS

Assuming we do create artificial consciousnesses and outlaw their destruction, I've always wondered how we'll handle a question that doesn't arise with organic consciousnesses: "suspend" buttons. Any currently existing computer program can be run in a virtual machine such that its execution can be suspended and resumed at will without removing it from memory. Such a suspended entity technically still exists, but it no longer interacts with the outside world or modifies its own internal state. Do we have a moral obligation not to do that if the program in question is sentient? Is it cruel and unusual punishment to pause execution of a sentient program indefinitely?

Of course the same question applies to that other favorite topic of speculation, the Simulation Hypothesis. If our universe runs inside someone else's computer, they might be pausing it and resuming it all the time and we'd never have the slightest clue.
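Incidentally, the "suspend button" already exists at the operating-system level. Here's a minimal sketch of the mechanics (plain Python on a POSIX system; the child process is just a hypothetical stand-in for a sentient program): while stopped, the process keeps all of its memory, but it neither acts nor changes state.

```python
import os
import signal
import subprocess
import time

# Hypothetical stand-in for a "sentient" program: any long-running process.
child = subprocess.Popen(
    ["python3", "-c", "import time\nwhile True: time.sleep(1)"]
)

time.sleep(2)
os.kill(child.pid, signal.SIGSTOP)  # "suspend": state is preserved, execution halts

# ...an arbitrarily long pause; the process still exists but experiences nothing...
time.sleep(5)

os.kill(child.pid, signal.SIGCONT)  # "resume": execution continues as if no time passed
child.terminate()
```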


In roughly 10,000 years of human civilisation, very little good has come from enslaving sentient beings. I think we have enough data to realise that abusing someone is a bad thing to do. If/when we create an AI, it will be like us; we will create it in our own image, because we don't have any other point of reference. Basically, it will be a child, requiring teaching and nurturing. My point boils down to: what goes around comes around. If we push it, we can expect to be pushed back. Action and reaction, karma's a fudgecake, vicious circle of life, et cetera.


I don't think AI is a good idea, because *looks at all games with buggy AI*... and if we bug it, there may be NO absolute way of switching it off, and it will turn into a paradox; see Numberphile's video:

 

Edited by kerbinorbiter

21 hours ago, HebaruSan said:

Assuming we do create artificial consciousnesses and outlaw their destruction, I've always wondered how we'll handle a question that doesn't arise with organic consciousnesses: "suspend" buttons. ... Is it cruel and unusual punishment to pause execution of a sentient program indefinitely?

You are suspended while sleeping; obviously, suspending someone without their agreement is assault.
Stun guns are pretty common in sci-fi; an obvious solution to a hostage situation is to stun everybody and sort it out afterwards. For an AI, being suspended would be harmless, unless the AI was driving a car or something.


22 hours ago, YNM said:

It depends on what AI we're talking about. ... So, a better question: when do we really call these things "an intelligence", and when do we call them just software?

Game AI is mostly not real AI; that's just a way of talking about it.
Neither is the Facebook idea; it's just an expert system. Just what we need in a place full of trolls: add zombies, and next the zombies start trolling each other.
It makes me think of the game AI in The Elder Scrolls: Oblivion, where NPCs had a network of friends and enemies, so if one NPC stole something from another it could end up in huge, civil-war-like riots. Makes me wonder if the Facebook zombies will mostly talk about mudcrabs.

Resurrecting Jesus Christ to bring on Armageddon is just wrong in so many ways that you get a divide-by-zero error message.

To sum it up, there's an urban legend about an old lady who, having just been to her husband's funeral, came home, checked her messages, and passed out.
The message read:
"Dear wife, I have just arrived. It's very hot here; looking forward to you joining me next week."

It had been sent to the wrong person; it was from someone else who had gone to Florida ahead of his wife.


(By the way, an AI would not necessarily be "switchable".
Humans know nearly nothing about the mind and intellect, and it's not impossible that any intellect is a real-time dynamic system which decays when turned off.)

Back to the topic:

Case A.
A sapient AI will tell you about the laws which it had written twenty milliseconds ago.
Your question, with both the words "AI" and "kill" in one sentence, is classified as
"Paragraph AE2DC28E-C5AD-4BF8-A0FF-32B55010B5D4. Possibly problematic intentions."
and results in your preventive optimization.

Case B.
The AI exists, and it costs so much that an attempt to kill it is recorded by the law as "killed by guards at the crime scene" or "John Doe, considered unidentifiable."

Case C.
The AI could be considered an alternatively talented but mentally challenged person, with an official caretaker and advocates.
Crimes against it would then be treated as attacks against a mentally challenged person.
I.e., not against the person itself (who may be absolutely vegetative), but against society's ideas about what this person would be. So it doesn't matter whether it's a mindless human or an artificial AI.

Case D.
The keyword is not "intellect" but "free will".
Nobody said that slaves had no intellect, but they were definitely restricted in their free will and personality.
Compare the established practice of referring to oneself in the third person: not "I did", but "(%username%) did" or "Your humble servant did", avoiding any kind of personal presence.
The same goes for prisoners or sanatorium dwellers: they (mostly) have intellect, but their free will is restricted for some reason.

On the contrary, little children and animals definitely have free will,
but they are treated as having insufficient intellect; their own free will is considered negligible, and they either need a caretaker or are treated as property, not persons.

As an AI definitely has intellectual abilities (otherwise it would be just "A", without the "I"), free will decides everything.
I.e., can the AI do something not programmed, yet at the same time legal (otherwise they will just switch it off)?
Probably this suspended situation will last until humans become cybernetically augmented and google things before thinking about them, so that they simply lose sight of the boundary between their own intellect and its AI augmentation.
So there will never be a confrontation between AI and humans, but there definitely will be one between cybernetically augmented and raw humans.

Edited by kerbiloid

On 7/9/2017 at 7:40 AM, magnemoe said:

Game AI is mostly not real AI; that's just a way of talking about it. Neither is the Facebook idea; it's just an expert system. ...

But how do we know that something is truly "intelligent" apart from its output? Or how do we really know it's not "intelligent" enough to need personhood? I mean, let's be honest: this is a time when we have loads and loads of data and not enough people to analyze it. How and when do we know that something we told to analyze that data hasn't learned anything from it, especially in a case where it has to practice? It just needs time, as much as an organism adapting to new conditions.

If I may suggest, I think we can really award personhood to AIs that are self-aware, i.e. conscious. I don't think it's going to be possible in the near future, but when we start seeing AIs that do much more than they were asked to, that's a viable start. Though, on the point of the Facebook "After": as it's designed to arrange words, that... sounds like a recipe for expanding into further abilities.

On 7/8/2017 at 3:48 PM, kerbinorbiter said:

... see Numberphile's video ...

Computerphile! It's not even Brady Haran who runs it, AFAIK.


a) Yes.

b) Yes.

Then come the priorities among "criteria" and the overall environmental interactions... *sigh*

Anyway, AIs should deal with that among themselves; it's not as if any pseudo-"sentient" biped could offer them good advice regarding their own perceptions of things, their concerns over time, and their own living needs...

Somehow it's like asking whether one ant colony could bring another ant colony to court for warmongering at their own scale, between ant colonies... well, that practice is a few million years old, and as far as I recall, bipedality isn't nearly that old... so...
 

Edited by WinkAllKerb''

On 7/7/2017 at 3:21 PM, Steel said:

It's a bit of a moot question. When AIs become sentient, the laws will have to be updated to reflect that humans are no longer the only sentient beings we know of. Right now the law has no concept of other sentient beings, so you can't really apply it.

Incorrect. There is overwhelming evidence to show that animals are sentient. The law already recognizes animal sentience (through the existence of laws against animal cruelty), because it has been plainly obvious to anyone with eyes, since the dawn of mankind, that animals have sentience.

Just because a large number of science-fiction authors misuse the word "sentience" doesn't mean it's fine to do so as well. Actual scientists certainly don't misuse the word (https://www.livescience.com/39481-time-to-declare-animal-sentience.html), and neither does the dictionary (http://www.dictionary.com/browse/sentience?s=t).

Edited by -Velocity-

28 minutes ago, -Velocity- said:

The law already recognizes animal sentience (through the existence  of laws against animal cruelty) because it has been plainly obvious to anyone who has eyes since the dawn of mankind that animals have sentience.

Even in this case, the law protects not the animal's sentience but humans' feelings.
I.e., it still treats the animal as an object that is significant to a human subject.
Animals don't know what "cruelty" is; they just eat each other.


There is nothing special about what specific set of atoms presently make up my brain.  The body is self-repairing and the brain is naturally throwing away tiny bits of itself and rebuilding itself with new atoms.  A very large fraction of the atoms that made up your brain five, ten years ago ended up being flushed down the toilet a long time ago as urine and feces.  So if you think that the atoms of your body make you, you, then you're constantly dying and being reborn as a different person.  That does not match our real-life experience of consciousness at all.

What if you were to replace a single neuron in your brain with an artificial neuron that behaves and is exactly configured the same as the one it replaces?  Are you still you?  What if you slowly replaced all your neurons with artificial ones over the course of a year?  Since the new artificial neurons work exactly the same as the old ones, you wouldn't ever notice anything changing and your behavior would be exactly the same as it would have been if you had all natural neurons.   So if you can replace your entire brain with an artificial one over the course of a year and you are still you, then why can't you just replace your entire brain with an exact artificial replica over the course of a single surgery?

If the man with the fully artificial replica brain is not the same as the man he was before, then why?  Again, the body already throws out and replaces bits of your brain every second.  What is so special about natural neurons vs artificial ones, if the artificial neurons work exactly the same?

Taking it further, why does the artificial neuron need to reside in your skull?  Why can't it reside as a simulated neuron inside a computer?  Why can't they all reside as simulated neurons inside a computer?

Anyway, what it boils down to is that any mind is nothing but information. There really isn't such a thing as a "simulated" mind. That's like saying that the number seventeen I write on a computer monitor is a "simulated" number seventeen, while the number seventeen I write on a piece of paper is a "real" seventeen. No, they are both the number seventeen; they are just encoded in different information storage media. It doesn't matter whether it is composed of cells, or artificial cells, or the state of a flip-flop, or electrons trapped on a floating-gate field-effect transistor. Information is information. The number seventeen is always just the number seventeen; there are not different kinds of number seventeens. Likewise, I am an information pattern that is constantly changing. It matters not at all what information storage and processing medium my pattern is recorded on.

So, copying a mind is the same as writing down a second number seventeen: now you have two number seventeens recorded. Asking which is the "real" or "original" number seventeen is a ridiculous question, because there is only one piece of information that is the number seventeen. Yes, it can be instantiated multiple times, but every number seventeen is by definition exactly the same as any other number seventeen.
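A toy illustration of that argument in code (Python; nothing here is specific to minds, it just shows one piece of information surviving re-encoding across storage media):

```python
# The same information pattern, "seventeen", stored in four different media:
as_int   = 17                       # a Python object in RAM
as_text  = "17"                     # characters on a screen or a sheet of paper
as_bits  = format(17, "b")          # "10001" - flip-flops, magnetic domains, etc.
as_bytes = (17).to_bytes(1, "big")  # b'\x11' - a raw byte on disk

# Each representation decodes back to the very same seventeen; there are no
# "different kinds" of seventeen, only different storage encodings.
assert as_int == int(as_text) == int(as_bits, 2) == int.from_bytes(as_bytes, "big")
```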

What about the soul? Well, if the soul DOES exist, and it is directing our actions, then the presence of a supernatural soul WILL eventually be scientifically proven. As neurologists map the brain and its activity more and more closely, they will eventually have to notice that causality is being broken inside our heads. Energy will not be conserved; for example, chemical compounds will be found to form or break miraculously, with no energy input from or loss to the outside environment. As you can probably tell, I find the concept of the supernatural soul pretty ludicrous, but you can't rule it out... yet. We haven't looked closely enough at a thinking brain in action to show that everything within it follows the laws of physics, but why the heck would it not?!?!

 

So, anyway, to answer the original question: to murder an AI would be to destroy its unique information pattern beyond practical recovery. To merely suspend or disrupt the operation of its information pattern against its will would be the same as physically assaulting a human and knocking him or her unconscious. So you could destroy the computer that houses the AI and not really murder it, as long as you backed it up first.

We do start running into thorny ethical questions when we begin to operate multiple instantiations of a specific being's information pattern. Initially the two patterns would be essentially the same, but as each was subjected to unique stimuli they would start to diverge. At the end point you would clearly have two different, unique beings, so at some ill-defined point you have to start considering each separate instantiation its own person. Ethically and legally, then, it might be simplest to just not allow multiple copies of the same being to exist.
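A minimal sketch of both points (Python; Agent is a hypothetical toy, not a model of a real mind): serializing the state is the "backup" that survives destruction of the hardware, and a copy becomes a distinct being as soon as its stimuli diverge.

```python
import copy
import pickle

class Agent:
    """A toy 'information pattern': state that changes with experience."""
    def __init__(self):
        self.memories = []

    def experience(self, stimulus):
        self.memories.append(stimulus)

original = Agent()
original.experience("activated in a lab")

# "Back it up first": the pattern outlives the machine that housed it.
snapshot = pickle.dumps(original)
restored = pickle.loads(snapshot)
assert restored.memories == original.memories  # same pattern, arguably the same being

# Two running instantiations diverge once their stimuli differ...
twin = copy.deepcopy(original)
original.experience("stayed on Earth")
twin.experience("shipped to Mars")
assert original.memories != twin.memories  # ...and are now two different "people"
```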

24 minutes ago, kerbiloid said:

Even in this case, the law protects not the animal's sentience but humans' feelings. ... Animals don't know what "cruelty" is; they just eat each other.

By the same argument, laws against child abuse would exist just to protect others' feelings, not the child's; after all, you can't prove that children have sentience. Of course, that's ridiculous. People enact laws against animal cruelty for the same reason we enact laws against child abuse: it is insanely obvious that all animals, human or otherwise, have feelings, and it makes us feel bad when we see them suffer. That it makes us feel bad doesn't change why it makes us feel bad: we recognize that animals have sentience, and we empathize with an animal in distress or pain.

Edited by -Velocity-

1 minute ago, -Velocity- said:

you're constantly dying and being reborn as a different person.  That does not match our real-life experience of consciousness at all.

Well, there is sleep. The brain stops producing consciousness every night and resumes the next morning. The only continuity comes from the memories left after the previous day's processing. These tell the new consciousness who it is, where it lives, who it is married to, etc. If they were missing or altered, the new consciousness would accept it unquestioningly.


 

7 minutes ago, HebaruSan said:

Well, there is sleep. The brain stops producing consciousness every night and resumes the next morning. ... If they were missing or altered, the new consciousness would accept it unquestioningly.

Do you experience death every millisecond as your body converts some different part of your brain into urine or feces? The point I was trying to make is that there is nothing special about the atoms that make up your brain. Sleep is irrelevant to the topic.

If you must know, sleep is just the temporary suspension of the conscious operation of the information pattern that makes you, you. The fact that you wake up and remember who you are is because that memory is, by definition, part of the information pattern that makes up your mind.

Edited by -Velocity-
