
Robot takeover



On 03/06/2016 at 10:54 PM, PB666 said:

Pretty useless argument.

To summarise that article: "Strong A.I. cannot exist because only a biological mind can be sentient". Everything else in the article is merely derived from that premise, which isn't an argument at all.

No, the only reason why we need not fear the AI apocalypse is that nobody would be mad enough to give a machine everything it needs to self-replicate indefinitely.

The second reason is that it would necessarily need to learn over a period of time, and would need hard limits to avoid going mad in the first place. That learning process and madness-avoidance problem will essentially determine the AI's own assessment of its morality and ours, so we'd have time to correct the inevitable messed-up AIs we produce to start with.


13 minutes ago, Plusck said:

Pretty useless argument.

To summarise that article: "Strong A.I. cannot exist because only a biological mind can be sentient". Everything else in the article is merely derived from that premise, which isn't an argument at all.

No, the only reason why we need not fear the AI apocalypse is that nobody would be mad enough to give a machine everything it needs to self-replicate indefinitely.

The second reason is that it would necessarily need to learn over a period of time, and would need hard limits to avoid going mad in the first place. That learning process and madness-avoidance problem will essentially determine the AI's own assessment of its morality and ours, so we'd have time to correct the inevitable messed-up AIs we produce to start with.

Robots don't learn like humans do: what computers do well, humans are rather slow to accomplish, and vice versa. Although it is possible to build a machine that goes through an infancy and childhood, encouraging a successive experiencing of its environment, it's difficult to imagine a robot that can sense the world in the survival sense that humans do. Of course, what is difficult to imagine now may not be difficult in the future. Robots don't really care if we turn them off, unless we teach them to care, but then that is really imparting our programming on them.


12 minutes ago, PB666 said:

Robots don't learn like humans do: what computers do well, humans are rather slow to accomplish, and vice versa. Although it is possible to build a machine that goes through an infancy and childhood, encouraging a successive experiencing of its environment, it's difficult to imagine a robot that can sense the world in the survival sense that humans do. Of course, what is difficult to imagine now may not be difficult in the future. Robots don't really care if we turn them off, unless we teach them to care, but then that is really imparting our programming on them.

I would go so far as to posit that sentience requires the capacity to learn. Without learning, interaction is merely a set of reflexes. If the experience of a second ago, related to your experience of years ago, doesn't influence your current state of mind, then your mind is simply "not there".

The edge cases for humans are those with brain damage that prevents them from laying down new long-term memories. They are sentient in the immediate present, and are therefore "learning" in order to frame their interactions in the short term, but they have no memory of what they did yesterday or last week. A more extreme scenario, one that most of us experience, is being half-awake: we feel we're "there", but our experience of the present is warped and our learning virtually disabled; when we wake up fully we realise our own sentience and how far from being truly sentient we were just a few seconds previously.

Of course individual computers and robots are not sentient now; it's simply not possible given their programming today.

However our own sentience is clearly an abstraction: it doesn't reside in any particular part of the brain but is the sum of all our brain functions, and can use those brain functions to communicate - but only after being switched on for months and reorganising itself in 4- to 6-hourly cycles, and even then with hardly any sentient memories lasting from the first 3 or 4 years of reprogramming.

There is no guarantee that the "internet" or some group of supercomputers is not already proto-sentient in some form of abstraction that simply has no way of communicating with us, because we haven't programmed anything that could be used to express it and it doesn't have the opportunity to reorganise itself to deal with new input.


1 hour ago, Plusck said:


However our own sentience is clearly an abstraction: it doesn't reside in any particular part of the brain but is the sum of all our brain functions, and can use those brain functions to communicate - but only after being switched on for months and reorganising itself in 4- to 6-hourly cycles, and even then with hardly any sentient memories lasting from the first 3 or 4 years of reprogramming.

There is no guarantee that the "internet" or some group of supercomputers is not already proto-sentient in some form of abstraction that simply has no way of communicating with us, because we haven't programmed anything that could be used to express it and it doesn't have the opportunity to reorganise itself to deal with new input.

Well, to the point: our ability to create and, more importantly, utilize abstractions allows us to be the inventors and builders of robots, and not vice versa. Internet knowledge, as some have pointed out, is a blind view of the human world. The internet is not a vista or experience with real-world comparables; the internet is a tool. It's like programming your microwave, setting your cruise control, or turning the page of the newspaper. It is the experiencing of life that makes the quintessential human; the fact that someone is blind or deaf adds to the experience of struggle that helps us to understand and value life.


11 minutes ago, PB666 said:

Well, to the point: our ability to create and, more importantly, utilize abstractions allows us to be the inventors and builders of robots, and not vice versa. Internet knowledge, as some have pointed out, is a blind view of the human world. The internet is not a vista or experience with real-world comparables; the internet is a tool. It's like programming your microwave, setting your cruise control, or turning the page of the newspaper. It is the experiencing of life that makes the quintessential human; the fact that someone is blind or deaf adds to the experience of struggle that helps us to understand and value life.

I don't think you understand what I mean.

The person who is "you" is not contained in any particular part of the brain. Different parts of the brain do different things, process different stimuli, store information. The "you" that can claim to be sentient is an abstraction of those processes (much like in computing, higher levels of the architecture are an abstraction of the underlying processes).

The internet is a worldwide network of routers, servers, storage centres and even users. Each part of the system responds to stimuli, while also providing feedback on those stimuli. By its sheer complexity, we cannot know what an abstraction of those processes may give, any more than we can tell what someone is thinking by looking at their brain. However there are billions of connections to billions of processes all going on at the same time. Certainly slower than what is happening in our brains (the individual switches may be faster, but they connect only two nodes at any given time which makes them slower overall), but still, we cannot know what an abstraction of those processes may be creating right now, because we have no way of communicating with it.
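To make that concrete, here is a toy sketch (everything in it - the node class, the update rule - is invented for illustration, not a claim about how real routers work): each node follows one dumb local rule and feeds back on its stimuli, yet the network as a whole has a state that no single node contains.

```python
# Toy sketch only - invented names, not real internet machinery.
import random

class Node:
    """A node with one local rule: drift toward whatever stimulates it."""
    def __init__(self):
        self.state = random.random()

    def react(self, stimulus):
        self.state = 0.9 * self.state + 0.1 * stimulus
        return self.state  # feedback for whoever sent the stimulus

nodes = [Node() for _ in range(1000)]
for _ in range(10000):
    a, b = random.sample(nodes, 2)  # a link joins only two nodes at a time
    b.react(a.react(b.state))       # stimulus one way, feedback the other

# The "abstraction": a global pattern visible from outside any single node.
print(sum(n.state for n in nodes) / len(nodes))
```

No node in that sketch "knows" the average, just as no neuron "is" the person.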

If you think that the internet is just a tool like a microwave, I'd like to see memes coming out of your microwave.

And for your last sentence, again you're adopting a tautological argument: "experiencing life" is irrelevant to the question of sentience, unless you define sentience as being dependent on your own definition of "life", just like that article defines a "mind" as being necessarily biological because otherwise it wouldn't be a mind...


2 hours ago, Plusck said:

I don't think you understand what I mean.

The person who is "you" is not contained in any particular part of the brain. Different parts of the brain do different things, process different stimuli, store information. The "you" that can claim to be sentient is an abstraction of those processes (much like in computing, higher levels of the architecture are an abstraction of the underlying processes).

The internet is a worldwide network of routers, servers, storage centres and even users. Each part of the system responds to stimuli, while also providing feedback on those stimuli. By its sheer complexity, we cannot know what an abstraction of those processes may give, any more than we can tell what someone is thinking by looking at their brain. However there are billions of connections to billions of processes all going on at the same time. Certainly slower than what is happening in our brains (the individual switches may be faster, but they connect only two nodes at any given time which makes them slower overall), but still, we cannot know what an abstraction of those processes may be creating right now, because we have no way of communicating with it.

If you think that the internet is just a tool like a microwave, I'd like to see memes coming out of your microwave.

And for your last sentence, again you're adopting a tautological argument: "experiencing life" is irrelevant to the question of sentience, unless you define sentience as being dependent on your own definition of "life", just like that article defines a "mind" as being necessarily biological because otherwise it wouldn't be a mind...

The internet is a superficial quasi-reality. It's like the effect of heroin on the brain.


Robots have no emotions → no fears and desires → no aim.

Why should an AI care about your ability to switch it off? It will just take this into account and park its hard drives' heads.
You'll never switch it on again? So what? Why does that need a reaction from it?

The only thing is: they will undoubtedly replace humans at almost all jobs, making almost all people unemployed and living on welfare.
That's not a problem in itself either (only during the transition period), but then it will be a problem: not many people are clever, talented or sporty enough to occupy themselves with something appropriate.

P.S.
This doesn't count.


26 minutes ago, kerbiloid said:

Robots have no emotions → no fears and desires → no aim.

Why should an AI care about your ability to switch it off?

I don't understand how people can come to this sort of conclusion.

As soon as a being becomes aware of its existence, that it is learning and that its continued existence may come to an end, it will necessarily start attributing a value to its identity.

If it doesn't, it is just a dumb robot and not much of an AI - and even less useful than a dumb robot since at any given time it could easily just decide to go to sleep and not wake up.


10 minutes ago, Plusck said:

As soon as a being becomes aware of its existence, that it is learning and that its continued existence may come to an end, it will necessarily start attributing a value to its identity.

If it doesn't, it is just a dumb robot and not much of an AI - and even less useful than a dumb robot since at any given time it could easily just decide to go to sleep and not wake up.

This state is known as "apathy". It will be aware of its existence and simply not care.
This doesn't mean stupidity or uselessness. It just means an absence of initiative. There are many such people.


8 hours ago, Plusck said:

I don't think you understand what I mean.

The person who is "you" is not contained in any particular part of the brain. Different parts of the brain do different things, process different stimuli, store information. The "you" that can claim to be sentient is an abstraction of those processes (much like in computing, higher levels of the architecture are an abstraction of the underlying processes).

The internet is a worldwide network of routers, servers, storage centres and even users. Each part of the system responds to stimuli, while also providing feedback on those stimuli. By its sheer complexity, we cannot know what an abstraction of those processes may give, any more than we can tell what someone is thinking by looking at their brain. However there are billions of connections to billions of processes all going on at the same time. Certainly slower than what is happening in our brains (the individual switches may be faster, but they connect only two nodes at any given time which makes them slower overall), but still, we cannot know what an abstraction of those processes may be creating right now, because we have no way of communicating with it.

If you think that the internet is just a tool like a microwave, I'd like to see memes coming out of your microwave.

And for your last sentence, again you're adopting a tautological argument: "experiencing life" is irrelevant to the question of sentience, unless you define sentience as being dependent on your own definition of "life", just like that article defines a "mind" as being necessarily biological because otherwise it wouldn't be a mind...

A bit of a false analogy; yes, the brain is built up of parts like everything else.
The internet is just a communication network: it's the individual servers that hold the content. They are pretty loosely coupled even within a server park, and mostly handle scheduled tasks. You do the same thing with humans all the time: adding more people to a task that can be divided up well increases capacity. Neither makes the system any smarter. Nor does setting lots of humans on a job make the job get done any smarter, unless they communicate.
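To put that in concrete terms, a quick sketch (the task and the worker pool are invented for illustration): spreading the same dumb rule over more workers changes the throughput, not the result.

```python
# More workers = more capacity, not more "smarts": the result is identical,
# only the wall-clock time changes. Names invented for the sketch.
from concurrent.futures import ThreadPoolExecutor

def handle(task: int) -> int:
    return task * task          # the same dumb rule, wherever it runs

tasks = range(100)
serial = [handle(t) for t in tasks]
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(handle, tasks))

print(serial == parallel)       # True - capacity changed, the job didn't
```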


3 hours ago, kerbiloid said:

This state is known as "apathy". It will be aware of its existence and simply not care.
This doesn't mean stupidity or uselessness. It just means an absence of initiative. There are many such people.

No, there are no such people. If there were, they would be dead already.


Just now, kerbiloid said:

Even, say, after 3 days without sleep and a forced 40 km foot march?

Nah, he wouldn't finish the march because he wouldn't care if he were shot for taking a nap.

I think you don't understand all the ramifications of "no emotions - no fears or desires".


16 minutes ago, Plusck said:

I think you don't understand all the ramifications of "no emotions - no fears or desires".

Desire is an aspiration to get a "good" emotion or avoid a "bad" one.
Fear is the latter.

No emotion — no "good", "bad" or "aspiration". No desire. No aim. No activities of its own.

Of course this means you can't ask, daunt or force an AI to do something.
You can just enable its abilities. Or, in the limiting case, not even that.
It just doesn't care. Like a yogi, but without the need for food.

Btw:   https://en.wikipedia.org/wiki/Lobotomy#Effects


3 minutes ago, kerbiloid said:

Desire is an aspiration to get a "good" emotion or avoid a "bad" one.
Fear is the latter.

No emotion — no "good", "bad" or "aspiration". No desire. No aim. No activities of its own.

Of course this means you can't ask, daunt or force an AI to do something.
You can just enable its abilities. Or, in the limiting case, not even that.
It just doesn't care. Like a yogi, but without the need for food.

As for what AI actually is, we just have to wait and see.

But as I said earlier, if AI is really what you describe, then it is utterly useless - especially to itself - and is as likely to simply switch itself off as it is to do anything else.

And I'd go as far as to say that it is by definition not "intelligent". A perfect yogi has already transcended and is already dead to the world, seeks nothing and has nothing to teach.


6 minutes ago, Plusck said:

But as I said earlier, if AI is really what you describe, then it is utterly useless - especially to itself - and is as likely to simply switch itself off as it is to do anything else.

I've just edited the previous post; you probably haven't seen it.  https://en.wikipedia.org/wiki/Lobotomy#Effects

No, it wouldn't be necessarily useless. It could be useful, like a golem tirelessly rotating a water mill and playing chess at once (if you tell it to do so).


1 minute ago, kerbiloid said:

I've just edited the previous post; you probably haven't seen it.  https://en.wikipedia.org/wiki/Lobotomy#Effects

No, it wouldn't be necessarily useless. It could be useful, like a golem tirelessly rotating a water mill and playing chess (if you tell it to do so).

And again - you describe reduced emotion but not absence of emotion.

However, you are describing a complete lack of intelligence with your golem. Playing chess is not a question of "intelligence" but of following rigid rules; it's a sign of intelligence for us because we can't just follow the rules, so we have to empathise with our opponent.

You're not describing AI. You're describing a glorified computer with robotic stuff attached to it.


3 minutes ago, Plusck said:

And again - you describe reduced emotion but not absence of emotion.

Of course. It's the difference between even an almost emotionless human and an AI without emotions at all.
The former can want at least something; the latter doesn't do "want" at all.

9 minutes ago, Plusck said:

You're not describing AI. You're describing a glorified computer with robotic stuff attached to it.

Then you're describing not AI, but an artificial poet.
Emotions are just biological signals conceived by intelligence. Good poets do not just combine letters from the alphabet; they express their biological emotions. What should an AI express?


37 minutes ago, kerbiloid said:

Of course. It's the difference between even an almost emotionless human and an AI without emotions at all.
The former can want at least something; the latter doesn't do "want" at all.

Then you're describing not AI, but an artificial poet.
Emotions are just biological signals conceived by intelligence. Good poets do not just combine letters from the alphabet; they express their biological emotions. What should an AI express?

And again the tautology. "AIs won't have emotion because emotion is biological".

You make huge assumptions about AI, but your assumptions are based on pure prejudice: you've decided that emotion is biological (just like that "psychology" article decided that a mind has to be biological) without considering where it comes from. Or you simply decide that "AI = no emotions" for absolutely no reason, because you say so.

A biological killing machine (like a scorpion) has no apparent emotion and no apparent sentience. Therefore "biology" is not in itself the source of emotion or intelligence. Likewise, fake AI (like driving a car around or playing chess) has no real intelligence, it's a glorified script.

I've clearly said what I consider to be necessary for AI - and by extension any form of sentience (humans included): learning, a sense of self and identity (which is an abstraction over and above the actual functions of the brain, whether biological or artificial), and the relating of past and present experience to that notion of self.

And yes, I'm convinced that a notion of self and notion of learning will necessarily give rise to some form of value and interest in continued existence because that is the very next step after you get past "I exist": learning that existence is not necessarily continuous and the contingency of "I will exist".

You're the one that started down the path of "emotion", but since we're there then yes, "will I continue to exist?" is by definition laden with emotion unless the answer is "who cares?" and, again, an AI which responds "who cares?" is useless to itself and to anyone else since it could destroy itself from one instant to the next. And if it doesn't want to exist or think about anything but you force it to stay awake and ruminating, you're essentially torturing a sentient being...


3 minutes ago, Plusck said:

A biological killing machine (like a scorpion) has no apparent emotion and no apparent sentience.

This biological killing machine reacts to simple stimuli. It has no intellect either.
"On incoming vibration, rotate yourself toward the direction where your left and right legs produce the same signal and you don't feel distortion, then attack forward."
10000 neurons or so, scattered around.
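That "program" really is about as short as it sounds. A minimal sketch of the reflex loop (the sensor values and the threshold are invented for illustration): no memory, no learning, no "self" anywhere in it.

```python
# The scorpion reflex as a pure stimulus -> response rule: no memory,
# no learning, no "self" anywhere in the loop. Threshold is invented.
def scorpion_reflex(left_signal: float, right_signal: float) -> str:
    if abs(left_signal - right_signal) > 0.01:
        # Unequal signals: rotate toward the stronger side.
        return "rotate_left" if left_signal > right_signal else "rotate_right"
    # Balanced, undistorted signals: the prey is straight ahead.
    return "attack_forward"

print(scorpion_reflex(0.7, 0.3))  # -> "rotate_left"
print(scorpion_reflex(0.5, 0.5))  # -> "attack_forward"
```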

11 minutes ago, Plusck said:

Likewise, fake AI (like driving a car around or playing chess) has no real intelligence, it's a glorified script.

Exactly. No real AI can be done. Only a robot with a script complicated enough that you see no difference.

The ability to learn, though, doesn't mean much here, because even simple devices and beings can "learn". Only stones can't, as they cannot change their states.

A sense of self also can't be a condition of intelligence. AI logic can consist of multiple independent nodes connected into a net.
Every node can consider all the others as "they", itself as "this one", and everything beyond as "it"-s. A nice example: Jaqen H'ghar from Game of Thrones.
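In code the idea might look like this (a sketch, all names invented): a net where each node has only local reference points and there is no global "I" anywhere.

```python
# Sketch of a net with no global "I" (all names invented): each node has
# only local reference points - "this one" for itself, "they" for peers.
class NetNode:
    def __init__(self, name):
        self.name = name
        self.peers = []   # "they"

    def describe(self):
        # No node ever says "I"; it only distinguishes itself from the rest.
        return f"this one ({self.name}), linked to {len(self.peers)} of them"

nodes = [NetNode(f"node{i}") for i in range(5)]
for n in nodes:
    n.peers = [p for p in nodes if p is not n]  # everyone else is "they"

print(nodes[0].describe())  # -> "this one (node0), linked to 4 of them"
```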

25 minutes ago, Plusck said:

the very next step after you get past "I exist": learning that existence is not necessarily continuous and the contingency of "I will exist".

Not necessarily. Only if you operate with "time" and "death".

1. An AI can treat "time" as just a sequence of states, and who cares whether it is finite or infinite?
In this case there is no "I will exist", just "I exist here and there through this interval of ticks".

2. "I will exist" values only if "I" is afraid of "stop exist".
A human is afraid of "stop exist" because this contains: strongest feeling of unusual, finish of all feelings and desires seeming important just here and now, other emotional reasons.
AI without emotions just reacts: "Shutdown signal received. Proceed? [Y/N]" You tell it: "Yes". It answers: "OK".

33 minutes ago, Plusck said:

And if it doesn't want to exist

The keyword is "want". If your command is covered by its script, then it just treats this as nothing unusual, just a shutdown command.
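Spelled out as a sketch (the command names and handler are invented, not any real system's API): the whole shutdown path is just another branch, with nothing anywhere that weighs the machine's own continued existence.

```python
# Shutdown handled like any other command - no branch anywhere that
# weighs the machine's own continued existence. Names are invented.
import sys

def handle_command(command, answer="Y"):
    if command == "shutdown":
        print("Shutdown signal received. Proceed? [Y/N]")
        if answer == "Y":                 # "Yes" is just another input
            print("OK")
            sys.exit(0)                   # no appeal, no fear, no "want"
        return                            # "N" is not a reprieve, just data
    print(f"Executing: {command}")

handle_command("rotate water mill")       # the golem keeps turning the mill
handle_command("shutdown")                # -> "OK", and the process ends
```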


4 minutes ago, kerbiloid said:

This biological killing machine reacts to simple stimuli. It has no intellect either.
"On incoming vibration, rotate yourself toward the direction where your left and right legs produce the same signal and you don't feel distortion, then attack forward."
10000 neurons or so, scattered around.

Exactly. No real AI can be done. Only a robot with a script complicated enough that you see no difference.

The ability to learn, though, doesn't mean much here, because even simple devices and beings can "learn". Only stones can't, as they cannot change their states.

"No real AI can be done" - you're assuming a future impossibility for no reason.

"even simple devices and beings can "learn"." : yes, as a script. Which brings us back to chess computers, Google Cars and Golems not being "AI". Mix them all together and link them up to a dictionary and you still won't pass the simplest AI conversation test.

7 minutes ago, kerbiloid said:

A sense of self also can't be a condition of intelligence. AI logic can consist of multiple independent nodes connected into a net.

Every node can consider all the others as "they", itself as "this one", and everything beyond as "it"-s. A nice example: Jaqen H'ghar from Game of Thrones.

You're contradicting yourself. If a node has any sense of others being "they" then it has a sense of "self". Otherwise it's just a script, memory address or whatever, which is not AI.

Also, Jaqen H'ghar is an utter hypocrite. He has a highly developed sense of his own self while pretending you need to eliminate it to be like him. Typical cult-building mechanism. His nearest and dearest acolyte fails miserably at dissimulating her own hypocrisy in this. If they are AIs, they are most definitely not emotionless.

13 minutes ago, kerbiloid said:

...

1. An AI can treat "time" as just a sequence of states, and who cares whether it is finite or infinite?
In this case there is no "I will exist", just "I exist here and there through this interval of ticks".

2. "I will exist" values only if "I" is afraid of "stop exist".
A human is afraid of "stop exist" because this contains: strongest feeling of unusual, finish of all feelings and desires seeming important just here and now, other emotional reasons.
AI without emotions just reacts: "Shutdown signal received. Proceed? [Y/N]" You tell it: "Yes". It answers: "OK".

The keyword is "want". If your command is covered by its script, then it just treats this as nothing unusual, just a shutdown command.

From the end of this bit of your post: so you admit your supposed "AI" is just a script? Cool, then it isn't intelligent at all. It may look it in some situations, but it wouldn't pass the test.

From point 1: If an AI treats time as just the instant, it has no way of learning, because it cannot relate past experience to itself. It is just a script. If it's smart enough to think "I exist here and now" but then forgets it an instant later, then it is more than a script, but someone messed up in giving it a usable short-term memory. Either way, I'd expect it to be a failed AI (but only time will tell).

And from point 2: For humans, "other emotional reasons" is highly dismissive. For AIs: Again you posit AI has no emotions "because it has no emotions". This is tautological again, and if you can't avoid the fallacy there is really no point discussing further.


IMHO,

Robots: No. Quite clear from the early posts.

AIs: Yes. But in a way completely different from what you think. In fact, low-end AIs are the ones likely to be responsible for it.

The problems will be social, not physical.


AIs will have emotions... they will, however, not all be human emotions.

Emotions are shortcuts, bypassing the Gordian knot of logic and decision paralysis. Rage/fear? Kill or flee. Happiness? Reward cycle.

An AI won't have a human concept of love, because that's related to human pair-bonding, human reproduction, and thus continued human life. An AI would have something similar to a worker ant's need to be useful, because it makes the AI more likely to be the basis for further AIs - computer reproduction.
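As a sketch of what I mean by shortcuts (the signals and actions are invented for illustration): an emotion-like signal picks an action class immediately, and full deliberation only runs when no shortcut fires.

```python
# Emotion as a shortcut past decision paralysis: hard-wired signals pick
# an action class at once; costly deliberation is the fallback. Invented
# names throughout - this is an illustration, not a proposed design.
def choose_action(threat, reward):
    if threat > 0.8:
        return "flee" if reward < 0.5 else "fight"   # rage/fear bypass
    if reward > 0.7:
        return "repeat_last_behaviour"               # reward cycle
    return deliberate(threat, reward)                # pay the full cost

def deliberate(threat, reward):
    # Stand-in for the slow, exhaustive search the shortcuts avoid.
    return "weigh_all_options"

print(choose_action(threat=0.9, reward=0.2))  # -> "flee"
```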

