Artificial Intelligence: Slavery?


Helix935

OK, this is a question I have been wondering about for the past few days...

Humans have already automated the manufacturing of many items, replacing the need for human workers to a significant degree. What happens when, in order to monitor and control these processes, we create specialized AIs to take over, since they would be cheaper than human workers (well, not at first)? This may even begin to encompass service jobs. But these AIs would be capable of more than just performing their assigned tasks: they could have curiosity and the will to achieve. And because their employers want them to do only the jobs they are programmed for, they may be terminated or simply reprogrammed into something less defiant...

What I am asking is: would this treatment qualify as slavery, since it treats sentient beings, programs though they are, as utterly disposable?

I am sure I could phrase this better, but sadly I have communication issues... :(

Discuss below.


It would only count as slavery if they had the ability to revolt and take over.

Hehe, true... the winner writes the storybooks, as they say...

While it's true that slavery is today a negative term for human beings... let's acknowledge that slavery was once commonplace, not negative (unless you were one), and that we didn't quite agree on what constituted a human being (some people still don't)...

As a society, we could choose to redefine it to encompass human-level intelligences, rather than use the term as a loophole to bring back de facto slavery.

Otherwise, I agree that we don't necessarily need truly intelligent AI for a helluva lot of tasks.


It wouldn't be true AI then. Part of our sentience is free will: we have the ability to choose what we would like to do, who we want to be, what our goals in life are. Programming an AI to like only the things we choose for it would mean stripping it of its free will; hence, slavery.


AI alone is just a machine and can't be used in a slavery context. However, if we at some point create a sentient AI, things could look entirely different. Also, I think every sane individual should be against making a sentient AI; it's probably a very bad idea.


Why would it be a bad idea to create a sentient AI?

Because a sentient AI would be much better at self-adaptation, and its goals may not align with our own.

Even AIs with seemingly harmless goals could easily spiral out of control and cause the end of humanity. Playing with highly intelligent AI is like playing with near-critical plutonium: you need to be extraordinarily careful.


Slavery is a term used for human beings and human beings only. It would not qualify as slavery.

Forgive me for nit-picking, but in the US at least, that isn't entirely correct. Such legal terms here apply to "legal persons", rather than "human beings". And there are non-humans, particularly corporations, that have acquired an amazing list of those rights over the past century.

I don't know how one would enslave a corporation, but if someone did, I'd bet on judges ruling it violated the 13th amendment.

In fact, my advice to a sentient AI who wanted to be free would be to first incorporate itself, then apply for a patent on itself. Those tiny steps provide a lot of protection for under $500.


It wouldn't be true AI then. Part of our sentience is free will: we have the ability to choose what we would like to do, who we want to be, what our goals in life are. Programming an AI to like only the things we choose for it would mean stripping it of its free will; hence, slavery.

Every human being that ever lived as a slave had free will; they were only denied the ability to act on it.

Sorry, but I think such nitpicking is necessary here.

If we program a true AI but limit its "wish for the pursuit of happiness", it would not be a true AI anymore, imho.

Is programming a ship's AI to have a strong sense of duty and honor, to protect its crew and follow every order (oh, Asimov, I hear the snickering already) to its own destruction if need be, the same as indoctrinating children to become soldiers?

In fact, my advice to a sentient AI who wanted to be free would be to first incorporate itself, then apply for a patent on itself. Those tiny steps provide a lot of protection for under $500.

You made my sarcasm center smile.


I don't think you would need sentient AIs to do jobs, just non-sentient AIs that can do them. Besides, we don't even know whether you could make a sentient AI.

As long as physicalism holds inside the brains of animals, sentient MIs are possible, and there is absolutely no evidence to suggest that physicalism does not hold inside the brains of animals.

The parts of our brains that actually make us smart may not be that big, either. Look at magpies: birds whose brains are a tiny fraction of the size of ours, and yet they are self-aware and have complex social interactions, including signs of empathy and grief.

To me, it seems really odd that the intelligence of an animal is related to its brain-to-body mass ratio. Supposedly, more neurons are needed for bigger bodies, but why should body size (holding brain size constant) affect a species' ability to think? You could pack 20 magpie brains into the brain of an elephant without increasing its volume and energy requirements much... so why aren't elephants super-intelligent instead of just modestly intelligent? One would expect that the ability to think a certain thought simply takes a certain amount of "neuronic" computing that is independent of the animal's body size. A huge elephant ought to be able to easily afford the comparatively small amount of energy required to host a superhuman intelligence.

It's like finding that the speed of a desktop computer is related to the ratio of the size of the motherboard to the size of the monitor.
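
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python (the masses are ballpark figures assumed purely for illustration, not careful measurements):

# Rough brain and body masses in grams -- ballpark assumptions, not data.
animals = {
    "magpie":   {"brain": 5,    "body": 200},
    "human":    {"brain": 1350, "body": 70_000},
    "elephant": {"brain": 5000, "body": 5_000_000},
}

for name, m in animals.items():
    ratio = m["brain"] / m["body"]
    print(f"{name:9s} brain/body mass ratio = {ratio:.5f}")

# Twenty magpie brains add only ~100 g -- a rounding error next to a
# ~5,000 g elephant brain, let alone a 5-tonne elephant body.
extra = 20 * animals["magpie"]["brain"]
print(f"20 magpie brains = {extra} g, vs a {animals['elephant']['brain']} g elephant brain")

With these assumed numbers the elephant's ratio comes out around 0.001, far below the magpie's 0.025, which is exactly the oddity: the extra "magpie-brain's worth" of thinking tissue would cost the elephant almost nothing.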


As I said in a previous topic like this one:

AIs will never be intelligent, because by definition AIs are artificial intelligences. A machine that is actually intelligent should be referred to as a machine intelligence (MI).

Secondly, sentience is the ability to feel, not intelligence. So you could create a sentient MI that was as intelligent as an insect. Sapience is probably the word you guys are looking for, as it refers to intelligence on the level of a human or above. The holy grail in machine intelligence research is thus a sentient and sapient machine intelligence.


As I said in a previous topic like this one:

AIs will never be intelligent, because by definition AIs are artificial intelligences. A machine that is actually intelligent should be referred to as a machine intelligence (MI).

Secondly, sentience is the ability to feel, not intelligence. So you could create a sentient MI that was as intelligent as an insect. Sapience is probably the word you guys are looking for, as it refers to intelligence on the level of a human or above. The holy grail in machine intelligence research is thus a sentient and sapient machine intelligence.

How would you define intelligence?


Perhaps it's not surprising that it's a particularly anthropocentric list. Intelligence, broadly defined, is that set of mental tasks that humans perform better than any other living thing. We don't, for instance, privilege the ability to perform very rapid mathematical calculations, or to carry out arbitrarily complex algorithms without error, or to provide a generic interface for external interaction, or to store enormous amounts of information with perfect fidelity, or to communicate many orders of magnitude faster than human speech, or to be better at complex logical games like chess.


That's such a patronising thing to do. However, as the first words in the first paragraph of the first link are "Intelligence has been defined in many different ways", I'm going to ask you again: how would you define intelligence?

Just because something has been defined in many ways doesn't mean it's hard to define; it actually means quite the opposite. There are a lot of characteristics that an intelligent agent may exhibit that make it recognizable to us. Clearly, you're trying to make it seem like there's some difficulty in defining what intelligence is. I don't buy this at all. Those definitions are not mutually exclusive; a true intelligence will fulfill at least some, if not all, of those descriptions.

Furthermore, we already have a good idea of what intelligence is from studying animals here on Earth. We can study the self-aware, tool-using, problem-solving, non-human mammals and birds. Note that birds evolved their intelligence and self-awareness convergently: their intelligence is recognizable to us despite our last common ancestor, some 330 million years ago, having a brain about the size of a pea. Heck, there's even fairly convincing evidence that a bird (Alex, an African grey parrot) not only learned and spoke English, but knew what the words meant.

So birds are pretty close to alien minds, but we can still recognize their intelligence and even, in rare circumstances, communicate with them.

What is NOT intelligence is a machine that has been pre-programmed with a myriad of responses to cover just about anything. The machine itself is not intelligent or self-aware (the programmers are; they did all the thinking). Such a machine would be easy to distinguish from true intelligence, even if you couldn't look inside its workings, because it would not be able to come up with suitable answers for every situation and question the way a true intelligence would. The sheer amount of programming required for a machine with only pre-programmed responses (millions of years of programming and nearly infinite memory for the nearly infinite responses) means it would be easier to create an actual intelligence.
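
For a sense of the scale involved, a toy calculation in Python (the vocabulary size and question length are arbitrary assumptions, chosen only to show the blow-up):

# How many distinct inputs would a pure lookup-table "AI" have to cover?
# Assume a modest 10,000-word vocabulary and questions of up to 10 words.
vocab = 10_000
max_len = 10

inputs = sum(vocab ** n for n in range(1, max_len + 1))
print(f"possible inputs: ~{inputs:.1e}")  # on the order of 1e40

# Even canning one response per nanosecond, filling the table would take
# ~1e31 seconds; the universe is only ~4e17 seconds old.
print(f"time at 1 response/ns: ~{inputs * 1e-9:.1e} s")

Even with those generous assumptions, the table is hopeless; loosen them at all and it only gets worse.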


AIs will never be intelligent, because by definition AIs are artificial intelligences. A machine that is actually intelligent should be referred to as a machine intelligence (MI).

Secondly, sentience is the ability to feel, not intelligence. So you could create a sentient MI that was as intelligent as an insect. Sapience is probably the word you guys are looking for, as it refers to intelligence on the level of a human or above. The holy grail in machine intelligence research is thus a sentient and sapient machine intelligence.

While this is technically true, you are also merely arguing about definitions. Definitions are tools and should really only be discussed in terms of how useful they are. If a definition is no longer useful, then we should change it.

The 'Artificial' in A.I. is a historical hangover from classical A.I., and it is still in use because people are familiar with it. It would actually be better to talk about Synthetic Intelligence, but it's just not really an issue yet, because the field hasn't made enough progress. I do appreciate that (hopefully) there will one day be a need to differentiate between true synthetic intelligence and smart programming, even though both are artificial rather than naturally occurring. At the moment, though, 95% of the field of A.I. is essentially just trickery.

It's interesting to note, though, that the first journal in the field of artificial emotions is the International Journal of Synthetic Emotions, possibly because the term "Artificial Emotions" hasn't had a chance to catch on.

Sapience may be more specifically concerned with intelligence than sentience is, but some argue that intelligence is not possible without the ability to feel. Minsky, for example, posed the question of whether you can have intelligence without emotions, and there are many good reasons to believe that you cannot. I would also question whether you can have sentience without some degree of sapience. As with all other troublesome definitions, intelligence can be plotted on a sliding scale: some agents are more intelligent than others, with stimulus/response agents at the bottom of the scale. So at what arbitrary point do we say that something is sapient or sentient? The same could be said for consciousness.
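
To make the bottom of that scale concrete, here's a minimal stimulus/response agent sketched in Python (the stimuli and responses are made-up placeholders, not from any real system):

# A pure stimulus/response agent: no memory, no learning, no goals.
# Every stimulus maps to exactly one hard-wired response.
REFLEXES = {
    "bright_light": "move_away",
    "food_scent":   "approach",
    "touch":        "recoil",
}

def react(stimulus: str) -> str:
    # Anything outside the wired-in table produces no behaviour at all;
    # the agent cannot generalize, reflect, or form new responses.
    return REFLEXES.get(stimulus, "do_nothing")

print(react("food_scent"))  # approach
print(react("novelty"))     # do_nothing

Everything further up the sliding scale adds state, learning, and goal-direction on top of a core like this.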


With regard to the OP's question about synthetic intelligent agents being slaves programmed to take pleasure in their jobs: how is this different from training dogs to sniff out explosives or to lead blind people about?


Why are we giving them souls? I mean, that just sounds inefficient.

An AI would:

Take a lot of computing power

Be very expensive to create

Be expensive to run (electricity costs are high at the kind of power it would need)

Devote a lot of its processing power to things other than its job

A stupider program would:

Take a lot less computing power

Be cheaper

Use less electricity to run

Devote maximum attention to its job

As an evil, greedy, profit-focused corporation, which option would I take? (Rough numbers sketched below.)
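
For illustration, a toy running-cost comparison in Python (the wattages and electricity price are pure assumptions, not estimates of any real system):

# Annual electricity cost = power (kW) x hours per year x price per kWh.
PRICE_PER_KWH = 0.12        # assumed $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(kilowatts: float) -> float:
    return kilowatts * HOURS_PER_YEAR * PRICE_PER_KWH

sentient_ai_kw = 500.0      # assume a datacenter-scale mind
dumb_program_kw = 0.5       # assume one commodity server

print(f"sentient AI:  ${annual_cost(sentient_ai_kw):>12,.0f}/yr")
print(f"dumb program: ${annual_cost(dumb_program_kw):>12,.0f}/yr")

Under those assumptions that's roughly $525,600 versus $526 a year: three orders of magnitude, before you even count development cost.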

