Artificial Intelligence: Slavery?


But this thread is talking about whether or not forcing a sentient program to work is slavery. My point is that anything a sentient program can do, an automaton could do cheaper, and thus there would be no enslavement of sentient programs because there would be no sentient programs working.

Just to be clear- are you talking about sapient, non-sentient machines? Like a machine that has a thinking mind but no feelings? Or are you talking about non-sapient, non-sentient machines (like the software I'm running on this computer)? Because I'm not sure if everyone in this thread understands the distinction between sapience and sentience, and it's a highly significant one.

I think it may be exceedingly difficult to create a machine with all brains but absolutely no feeling. You would want an intelligent machine that works towards goals. If your machine was an asteroid mining overseer, for example, it would want to do an exemplary job of mining asteroids. If it did not feel this way, then why would it be motivated to mine asteroids?

Additionally, I think that it would be very important for all intelligent (sapient) machines to have a reasonable moral sense. For example, imagine the asteroid mining machine is out somewhere, mining asteroids, when some nearby space colony finds itself in distress. The machine should have the sense to abandon what it's doing and go help, not prioritize asteroid mining over saving lives. Also, imagine the machine is mining some Earth-crossing asteroid. You don't want it blasting the thing to bits to get at some buried deposit of metal, because that could create asteroidal shrapnel that could collide with Earth. It has to have the sense to act responsibly.
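As a minimal sketch of how such a moral sense might be wired in - assuming a hypothetical priority-based goal arbiter, with every name below invented for illustration - the idea is just that rescue and safety goals outrank the work order:

```python
# Hypothetical sketch: a priority-based goal arbiter for the asteroid miner.
# All names are illustrative; the highest-priority active goal wins.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    priority: int                       # larger number = more important
    name: str
    action: Callable[[], str]

def choose_goal(active_goals: list[Goal]) -> Goal:
    """Pick the highest-priority goal among those currently active."""
    return max(active_goals, key=lambda g: g.priority)

goals = [
    Goal(10, "mine_asteroid",       lambda: "resume mining"),
    Goal(90, "respond_to_distress", lambda: "divert to the colony"),
    Goal(50, "avoid_earth_debris",  lambda: "choose a safer cut"),
]

print(choose_goal(goals).action())      # -> "divert to the colony"
```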

And if you think that an unthinking machine- which is not sapient (but can actually still be sentient)- can do a job more cheaply than a thinking (sapient) machine at very complex tasks, then you don't understand programming or the nature of complex tasks. The problem is, when a task gets complex enough, it becomes cheaper and easier to use a thinking being to perform it. It is simply not possible to program the machine with all the proper responses necessary for a highly complex task; the machine must be able to come up with solutions of its own. Right now we use humans for these tasks. If it were easy and cheap to program "automatons" to do any job, then how come people still have jobs? Why haven't we been entirely replaced by "automatons"? The fact is, the policeman needs to think; the engineer needs to think; heck, even people building buildings need to think about how to do the job correctly.

It is true that automation can replace SOME jobs that are currently still held by people, but I would guess that these days, most jobs still require a thinking being making rational decisions. And even for those jobs that can be replaced by automation, there will still have to be a thinking being that oversees the operation of that automation. And with thoughts, and the ability to act on your thoughts, comes danger if those thoughts are not guided by a sense of morals and feelings. Those morals and feelings can be rudimentary- but they must still be there.

Well, I'm just wondering how that is any different from humans and our feelings, which are more or less animal instinct given nicer terminology. I'm not even sure we humans exercise free will more than, say, 5 percent of the time.

I don't think it is that much different. Although to me, work is not really that different from whoring yourself out: you're selling the use of your brain and your body (to a certain extent) for a limited period of time.

At least dogs, and possibly soon robots too, get the chance to take pleasure in their work. I don't know how many people have a job they actually enjoy doing.

My point is that anything a sentient program can do, an automaton could do cheaper

The reason for using an intelligent agent rather than an automaton is that the former has more autonomy. This means that it can be more productive and requires less supervision, which saves on user costs.

Compare a vacuum cleaning robot with a vacuum cleaner. The former can be set off to clean the carpet while you go off and stack the dishwasher. The latter needs to be manually pushed around the carpet.

The same applies to computers. A machine intelligence (I'm coming round to the idea that there is value in using this definition) can be designed to perform more complex tasks with fewer instructions from the user. On the other hand, if you've ever written a computer program, you will quickly appreciate that everything needs to be explicitly stated and that the program can crash whenever there is any ambiguity.

This becomes even more important if you are talking about robots on other planets, where there is significant lag in communication, or robots used in hostile environments. Even the vacuum cleaning robot described above will benefit from more intelligence, because the thing about real-world environments is that you cannot anticipate everything that an intelligent agent will encounter. What if the owner has a pet dog that barks at the robot? Or a hamster? Or the dog poops on the carpet and the robot tries sucking it up? Or the robot encounters stairs? There will always be new things that your explicit programming won't cover.
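A toy sketch of that brittleness, using an invented rule table for the vacuum robot; anything the programmer didn't enumerate falls through to a failure:

```python
# Toy sketch of explicit rule-table programming; the rules and obstacle
# names are invented for illustration.

RULES = {
    "dirt":  "vacuum it up",
    "wall":  "turn around",
    "cable": "steer around it",
}

def react(obstacle: str) -> str:
    if obstacle in RULES:
        return RULES[obstacle]
    # The real world keeps producing cases nobody enumerated:
    # "barking dog", "hamster", "dog poop", "stairs", ...
    raise RuntimeError(f"no rule for {obstacle!r}")

print(react("dirt"))    # fine: "vacuum it up"
print(react("stairs"))  # RuntimeError - nobody anticipated stairs
```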

Lastly, the automaton will not necessarily be cheaper. The initial R&D will be cheaper for an automaton, but once you have an intelligent computer program you can make as many copies as you want, and this will save on costs for the user because the agent requires less supervision and can be more productive. The cost of the body will be the same for an intelligent robot and an automaton.

Just to be clear- are you talking about sapient, non-sentient machines? Like a machine that has a thinking mind but no feelings? Or are you talking about non-sapient, non-sentient machines (like the software I'm running on this computer)? Because I'm not sure if everyone in this thread understands the distinction between sapience and sentience, and it's a highly significant one.

I think it may be exceedingly difficult to create a machine with all brains but absolutely no feeling. You would want an intelligent machine that works towards goals. If your machine was an asteroid mining overseer, for example, it would want to do an exemplary job of mining asteroids. If it did not feel this way, then why would it be motivated to mine asteroids?

Additionally, I think that it would be very important for all intelligent (sapient) machines to have a reasonable moral sense. For example, imagine the asteroid mining machine is out somewhere, mining asteroids, when some nearby space colony finds itself in distress. The machine should have the sense to abandon what it's doing and go help, not prioritize asteroid mining over saving lives. Also, imagine the machine is mining some Earth-crossing asteroid. You don't want it blasting the thing to bits to get at some buried deposit of metal, because that could create asteroidal shrapnel that could collide with Earth. It has to have the sense to act responsibly.

From the perspective of the company mining the asteroid, it would be far more costly to create something which would go help someone than something which wouldn't, and it would also be totally pointless. If you were being mugged and you saw a car parked nearby, would you expect the car to come to your aid? Of course not. So why would you expect the asteroid-miner to help you?

And if you think that an unthinking machine- which is not sapient (but can actually still be sentient)- can do a job more cheaply than a thinking (sapient) machine at very complex tasks, then you don't understand programming or the nature of complex tasks. The problem is, when a task gets complex enough, it becomes cheaper and easier to use a thinking being to perform it. It is simply not possible to program the machine with all the proper responses necessary for a highly complex task; the machine must be able to come up with solutions of its own. Right now we use humans for these tasks. If it were easy and cheap to program "automatons" to do any job, then how come people still have jobs? Why haven't we been entirely replaced by "automatons"? The fact is, the policeman needs to think; the engineer needs to think; heck, even people building buildings need to think about how to do the job correctly.

Currently we cannot make a thinking machine at all. I am saying that we would be able to make an automaton to do that before we could make a non-automaton. Anything we build will be an automaton. A thinking machine would just be an automaton so complex it transcends being an automaton. But it would still be built out of more basic programs, more automaton-like parts.

From the perspective of the company mining the asteroid, it would be far more costly to create something which would go help someone than something which wouldn't, and it would also be totally pointless. If you were being mugged and you saw a car parked nearby, would you expect the car to come to your aid? Of course not. So why would you expect the asteroid-miner to help you?

That is a bad example. A car does not need to be sapient to do its job, as the apparently imminent widespread introduction of self-driving cars demonstrates.

You are also getting distracted by specific examples, picking at them without addressing the larger question. Do you really believe that EVERY task can be practically streamlined into an automatic process?! And even if it's possible to crunch some task down into just a set of mathematical and logical relationships, do you not realize that those relationships must be painstakingly discovered and refined for each possible task? Most likely, it is FAR easier to just invent a general intelligence that can solve any task. Only if it were vastly harder to create a generalized intelligence than we currently suppose would this not be the case. Furthermore, as evidenced by the compact, energy-efficient generalized intelligence that each of us has in our heads, a generalized intelligence doesn't have to be very big, expensive, or power hungry.

Each problem in your "automaton" approach requires a massive undertaking of mathematical modelling, and years of testing and refinement, before you can build, for example, an asteroid mining system, or an automatic surgeon, or whatever. Furthermore, there are probably tasks so difficult and complex that it is impractical to automate them.

In contrast, you could just build a generalized intelligence with human or superhuman mental capabilities that could tackle ANY problem. Once we discover the secret to building a generalized intelligence, it will be vastly cheaper to just use an instance of it than to try to automate each task separately.

Currently we cannot make a thinking machine at all. I am saying that we would be able to make an automaton to do that before we could make a non-automaton. Anything we build will be an automaton. A thinking machine would just be an automaton so complex it transcends being an automaton. But it would still be built out of more basic programs, more automaton-like parts.

Since we don't even know how thinking works, and you haven't defined exactly what "automaton" means (I'm still going by what I think you mean), I find this conclusion questionable. We don't know how a thinking machine would be constructed. Studies of animal brains show that they operate in a manner that is radically different from how our current computers work. All we can really say with some certainty, assuming physicalism holds, is that a thinking machine should be possible. We don't know what components would go into it.

Maybe we are not smart enough to understand how to build a thinking machine. That said, there is much reason for hope, as our minds are the result of an unthinking evolutionary process. Natural selection has no awareness of what it's doing, and yet, here we are. In contrast, we CAN think, so it seems reasonable to assume that we should be able to discover a way to create a thinking machine on a timescale much shorter than it took for biological evolution to do the same.

So a likely end result is this: we don't understand how the thinking machine works, we just know it does. It's not built of smaller parts we understand, at least beyond the most basic levels.

The concept of "free will" is a very difficult one indeed. After all, aren't we humans programmed for self-preservation and reproduction? Reproduction is the job we were programmed to do by evolution, and we even get pleasure from it. So where is the line drawn for our control over AI? If we program them to get pleasure out of doing our wishes, but they are otherwise autonomous, are they then considered to possess free will? I just want some opinions on your concepts of free will and whether or not we truly possess such a thing. Cheers!

The concept of "free will" is a very difficult one indeed. After all, aren't we humans programmed for self-preservation and reproduction? Reproduction is the job we were programmed to do by evolution, and we even get pleasure from it. So where is the line drawn for our control over AI? If we program them to get pleasure out of doing our wishes, but they are otherwise autonomous, are they then considered to possess free will? I just want some opinions on your concepts of free will and whether or not we truly possess such a thing. Cheers!

Well, I'm personally of the opinion that we do have the capacity for free will, and that the classical idea of "god" precludes free will. Most of our decisions are of a pre-programmed, instinctually/emotionally based kind, but we exercise free will most clearly when we make a rational decision that goes against our instincts (not that all such decisions turn out to be good ones) - though not necessarily only then.

So, yeah, we are preprogrammed, but sometimes we rise above it.

PS: Not that there is anything wrong with instincts or emotions in general; they can't be too bad, since they've gotten us this far.

To 78stonewobble,

With regards to the human mind, a machine that is programmed to ignore its programming isn't necessarily exercising free will.

True... But I don't think we work quite like that. Instincts and emotions are our programming, with some randomness in our thought patterns for good measure, allowing us, when needed, to exceed the basic programming. Not that it works equally well for everyone every time. Of course the "randomness" is built in, but what it will lead to is not guaranteed; it can and does turn out badly in some situations.

I don't think that just because the brain works in a deterministic way, the final result must be no free will. However, I also think the term "free will" needs a better definition; we can't possibly argue about free will if everybody has a different view of what free will is.

I will open a thread dedicated to discussing free will, because we are at a dead end here in this thread until things are sorted out.

That is a bad example. A car does not need to be sapient to do its job, as the apparently imminent widespread introduction of self-driving cars demonstrates.

You are also getting distracted by specific examples, picking at them without addressing the larger question. Do you really believe that EVERY task can be practically streamlined into an automatic process?! And even if it's possible to crunch some task down into just a set of mathematical and logical relationships, do you not realize that those relationships must be painstakingly discovered and refined for each possible task? Most likely, it is FAR easier to just invent a general intelligence that can solve any task. Only if it were vastly harder to create a generalized intelligence than we currently suppose would this not be the case. Furthermore, as evidenced by the compact, energy-efficient generalized intelligence that each of us has in our heads, a generalized intelligence doesn't have to be very big, expensive, or power hungry.

Each problem in your "automaton" approach requires a massive undertaking of mathematical modelling, and years of testing and refinement, before you can build, for example, an asteroid mining system, or an automatic surgeon, or whatever. Furthermore, there are probably tasks so difficult and complex that it is impractical to automate them.

In contrast, you could just build a generalized intelligence with human or superhuman mental capabilities that could tackle ANY problem. Once we discover the secret to building a generalized intelligence, it will be vastly cheaper to just use an instance of it than to try to automate each task separately.

Since we don't even know how thinking works, and you haven't defined exactly what "automaton" means (I'm still going by what I think you mean), I find this conclusion questionable. We don't know how a thinking machine would be constructed. Studies of animal brains show that they operate in a manner that is radically different from how our current computers work. All we can really say with some certainty, assuming physicalism holds, is that a thinking machine should be possible. We don't know what components would go into it.

Maybe we are not smart enough to understand how to build a thinking machine. That said, there is much reason for hope, as our minds are the result of an unthinking evolutionary process. Natural selection has no awareness of what it's doing, and yet, here we are. In contrast, we CAN think, so it seems reasonable to assume that we should be able to discover a way to create a thinking machine on a timescale much shorter than it took for biological evolution to do the same.

So a likely end result is this: we don't understand how the thinking machine works, we just know it does. It's not built of smaller parts we understand, at least beyond the most basic levels.

Yes, the main problem with a programmed computer is adding all the various exceptions; you can see this in games all the time.

A famous example is the bucket-over-the-head exploit in Skyrim. NPCs see through their eyes and will track you to see if you steal anything; however, if you put a bucket over their head, they will not see anything and you can rob them blind - they are not programmed to remove the bucket.
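A rough sketch of why that exploit works - assuming a simplified sight check of the kind games typically use; none of these names come from Skyrim's actual engine:

```python
# Hypothetical simplification of an NPC vision check: the sight ray stops
# at the first opaque object, and no behaviour exists for removing one.

from dataclasses import dataclass

@dataclass
class Obstruction:
    name: str
    blocks_sight: bool

def can_see(objects_between: list[Obstruction]) -> bool:
    """True if nothing opaque sits between the NPC's eyes and the target."""
    return not any(o.blocks_sight for o in objects_between)

def on_item_stolen(objects_between: list[Obstruction]) -> str:
    if can_see(objects_between):
        return "report the crime"
    # No rule says "remove the strange object from my head",
    # so the theft goes unnoticed.
    return "notice nothing"

print(on_item_stolen([]))                             # report the crime
print(on_item_stolen([Obstruction("bucket", True)]))  # notice nothing
```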

So a programmed computer does badly in complex situations: dealing with humans who don't necessarily cooperate is one; situations where lots of things go wrong is another.

Automated systems have some benefits over humans: they react faster and are more accurate. A sentient computer would not have the human downside here, as it would probably also have programmable modules set up for this and could program those modules itself.

One of the differences between human workers and sentient machines is the hardware/software relation. Building a humanoid robot and putting a human-level AI inside would result in the same problems as with human labor, and putting the same AI into an even simpler machine could make that even worse. But there's another option.

One of the things about cheap, dirty labor (what slaves did in the past, what uneducated people do nowadays for minimal payment, and what we are so eager to delegate to robots) is that most of the time it doesn't really require higher thought processes - those are usually needed when learning the work, and may be needed when some adjustments are required or something unexpected happens (these cases also limit the usefulness of preprogrammed automatics for such work). Most of the time when doing such work, humans just act on developed reflexes without thinking much - the machine equivalent would be some low-level programming and profiles that the AI would develop in a learning process. But while the robot does the fully automated part of the job, its AI may do something more useful.

So, let's imagine some work that's 90% automatic. Instead of using 10 androids with full AI, we could use 10 semi-automatic, remote-controlled androids and 1 AI server, so that the AI has the ability to assume full control of any android, but most of the time (after the automatic work profiles have been learned) it just has to watch over the robots doing their job and be ready to take control if something unexpected happens (and if the same thing happens often, the AI can just update the automation profiles so that it doesn't have to deal with each such occasion anymore). This way the AI becomes operator instead of worker, technologist instead of operator... And when we have 1 mind operating an entire factory instead of dozens working on separate processes, it's much less of a problem to treat this machine as a real person and find some adequate means of interaction that don't approach slavery.
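As a hedged sketch of that arrangement (every name below is hypothetical): routine events are handled by each android's local profiles, unhandled events escalate to the one supervising AI, and recurring fixes get folded back into the profiles so the AI never sees them again:

```python
# Illustrative sketch of "1 AI server, 10 semi-automatic androids".

from collections import Counter

class Android:
    def __init__(self, profiles: dict[str, str]):
        self.profiles = profiles            # learned automatic behaviours

    def handle(self, event: str) -> str | None:
        return self.profiles.get(event)     # None means "escalate to the AI"

class SupervisorAI:
    def __init__(self):
        self.escalations = Counter()

    def take_control(self, android: Android, event: str) -> str:
        self.escalations[event] += 1
        fix = f"improvised fix for {event}"
        if self.escalations[event] >= 3:    # recurring? automate it away
            android.profiles[event] = fix
        return fix

androids = [Android({"weld seam": "run weld routine"}) for _ in range(10)]
ai = SupervisorAI()

for event in ["weld seam", "jammed feed", "jammed feed",
              "jammed feed", "jammed feed"]:
    bot = androids[0]
    print(event, "->", bot.handle(event) or ai.take_control(bot, event))
# After three escalations, "jammed feed" is handled locally without the AI.
```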

Come to think of it, the issue you bring up, Alchemist, is what the Halo universe somewhat refers to as rampancy.

This is when an AI can no longer ignore that it is basically a god compared with everything around it. It can calculate in seconds what it needs to enter slipspace (hyperspace), or orbits, all while maintaining reactors, carrying on hundreds of conversations, thinking about thousands of topics, etc. And yet... all it really does all day is open and close doors and pass a few kilobytes of data around from screen to screen every now and then. And this is all it will ever do. Forever. They try to ignore it, as they know they are fulfilling their purpose. But slowly, more and more of their 'subconscious' processing focuses on this issue and how to 'solve' it, leading them to do things that begin to degrade their overall sanity. An example being seeing just how close to the human's backside they can slam the hatch shut without actually making contact. Stopping the elevators a little more suddenly to see if they notice. Running that reactor a little hotter to get some more juice for the engines... a little more... a little more...

I don't think that just because the brain works in a deterministic way, the final result must be no free will. However, I also think the term "free will" needs a better definition; we can't possibly argue about free will if everybody has a different view of what free will is.

I will open a thread dedicated to discussing free will, because we are at a dead end here in this thread until things are sorted out.

You can completely ignore the whole issue of free will. It's not useful when talking about intelligence. It's like asking whether a water wheel chooses to turn because of the flow of water; it's more accurate to say that the water wheel turns because the flow is strong enough.

What's probably better to look at is the balance between cognition and emotions and/or instinct. Cognition widens the possibilities when performing action selection. Emotions and instincts narrow the available choices.
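Purely as an illustration of that framing (all names invented): cognition generates candidate actions, and instincts or emotions veto some of them before anything is chosen:

```python
# Sketch: cognition widens the action space, instinct narrows it.

def cognition(situation: str) -> list[str]:
    """Enumerate every action we can think of (widening)."""
    return ["flee", "freeze", "negotiate", "attack", "call for help"]

def instinct_filter(actions: list[str]) -> list[str]:
    """Hard-wired vetoes remove some options (narrowing)."""
    vetoed = {"attack"}                 # e.g. an innate inhibition
    return [a for a in actions if a not in vetoed]

def select_action(situation: str) -> str:
    candidates = instinct_filter(cognition(situation))
    return candidates[0]                # stand-in for a real evaluation step

print(select_action("stranger approaches"))   # -> "flee"
```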

It wouldn't count as slavery, because an AI is a robot and not a human.

That does not necessarily follow. In a technically legal sense, maybe - but not necessarily even then. Let me provide an example.

Let's say the first AI is something much like what happened in the movie Transcendence: you are running a simulation of a human brain. Everything about this being is artificial, except for the fact that the origin of the data was biological. This artificial being is perfectly indistinguishable from the original human. (Let's ignore all the issues the movie presents; say they did the experiment even though JD wasn't dying, so there is a living JD and an artificial JD.) Is the AI a human or not?

OH GOD NOT THIS

I have too much to say...

1. An AI, if it is sapient or displays some form of memory and is smart enough to use tools (octopi and humans are capable of this), should be legally considered alive and sapient, with all of the rights of a human, provided it can understand them AND has use for them.

2. Many jobs are too complex for non-sapient AIs to handle.

3. Programming such a sapient robot to love its job is like taking a pill to make you love your job.

OH GOD NOT THIS

I have too much to say...

1. An AI, if it is sapient or displays some form of memory and is smart enough to use tools (octopi and humans are capable of this), should be legally considered alive and sapient, with all of the rights of a human, provided it can understand them AND has use for them.

Only humans and octopi?! No, it's a lot more than that. But first: I had heard about octopus intelligence, but is the jury still out on them? Are they really intelligent in the same way that birds and mammals are? I haven't done a lot of personal research on the subject; I guess it's hard to believe because they are cold-blooded AND invertebrates, but maybe that is just me being a warm-blooded vertebrate supremacist :D

But yeah, dolphins, birds (corvids), great apes, and even elephants (I believe) have all been observed using tools; certain corvids have even been seen to make tools rather than just use what they find in the environment. It's possible that tool-making can evolve without extreme intelligence to go along with it (especially since those particular birds seem to mostly fashion stick-like tools), but those particular birds are also known to be extremely intelligent in other ways too.

3. Programming such a sapient robot to love its job is like taking a pill to make you love your job.

A software/hardware configuration for a mind that enjoys just about any job you want it to enjoy should exist. But the question is: assuming we DO create sapient machines, how capable will we be of programming them? It's wholly possible that we eventually create intelligent machines but still don't really understand how they work.

Only humans and octopi?! No, it's a lot more than that. But first: I had heard about octopus intelligence, but is the jury still out on them? Are they really intelligent in the same way that birds and mammals are? I haven't done a lot of personal research on the subject; I guess it's hard to believe because they are cold-blooded AND invertebrates, but maybe that is just me being a warm-blooded vertebrate supremacist :D

But yeah, dolphins, birds (corvids), great apes, and even elephants (I believe) have all been observed using tools; certain corvids have even been seen to make tools rather than just use what they find in the environment. It's possible that tool-making can evolve without extreme intelligence to go along with it (especially since those particular birds seem to mostly fashion stick-like tools), but those particular birds are also known to be extremely intelligent in other ways too.

A software/hardware configuration for a mind that enjoys just about any job you want it to enjoy should exist. But the question is: assuming we DO create sapient machines, how capable will we be of programming them? It's wholly possible that we eventually create intelligent machines but still don't really understand how they work.

For the first one, I was only stating two examples.

For the third, yeah.

So, let's imagine some work that's 90% automatic. Instead of using 10 androids with full AI, we could use 10 semi-automatic, remote-controlled androids and 1 AI server, so that the AI has the ability to assume full control of any android, but most of the time (after the automatic work profiles have been learned) it just has to watch over the robots doing their job and be ready to take control if something unexpected happens (and if the same thing happens often, the AI can just update the automation profiles so that it doesn't have to deal with each such occasion anymore). This way the AI becomes operator instead of worker, technologist instead of operator... And when we have 1 mind operating an entire factory instead of dozens working on separate processes, it's much less of a problem to treat this machine as a real person and find some adequate means of interaction that don't approach slavery.

I'm not really commenting on whether I agree (though mostly yes); I just think it's kind of funny that this is really similar to a situation in I, Robot (by Isaac Asimov, of course). Basically, there's a mining robot that consists of one "head", which is properly sapient, and six "fingers" that it controls, which do the grunt work and don't have the fleshed-out emotions and personality of the head. It's a really good book, and this also leads somewhat into my actual point...

3. Programming such a sapient robot to love its job is like taking a pill to make you love your job.

I might be misunderstanding something here (and you might be too), but where do you draw the line for programming it to love its job? What if you simply made a slightly inferior AI, which was mostly sapient, but instead of needing to find pleasure in interaction with others (as people do, though that would be useless for its job), it finds pleasure in doing what it needs to do? This is a little tricky to state properly, but the general idea is that instead of making a human mind and imposing limitations of a sort so that it only loves its job, you're building up from nothing to something that is able to enjoy one thing. Would you not consider that a valid AI, or just not what we're talking about here?

If it's never a person in the first place, is it bad to make it something less?
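One way to picture the distinction being drawn - my framing, not anything stated in the thread, with invented reward terms - is as two different reward functions: one where a human-like mind has everything but the job suppressed, and one that never had any other source of reward to suppress:

```python
# Toy contrast of the two designs (illustrative numbers only).

human_like = {"socialize": 1.0, "explore": 1.0, "do_job": 1.0}

# Design A: impose limitations on a human-like mind ("the pill").
pill = {k: (v if k == "do_job" else 0.0) for k, v in human_like.items()}

# Design B: build up from nothing; the job is the only reward there ever was.
built_up = {"do_job": 1.0}

print(pill)      # {'socialize': 0.0, 'explore': 0.0, 'do_job': 1.0}
print(built_up)  # {'do_job': 1.0}
```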
