
Serious Scientific Answers to Absurd Hypothetical Questions



6 minutes ago, p1t1o said:

Why do people expect [general, strong] AI to be automatically benevolent? Or even competent?

Because the more things are ruled by AI, and the more information comes from AI, the less room there is for lies and intrigue.

(Isn't that the aim of any separation of powers system?)

Edited by kerbiloid

19 minutes ago, p1t1o said:

Why do people expect [general, strong] AI to be automatically benevolent? Or even competent?

I never said the equalization would be benevolent. Or painless.

Resistance is futile, though.


26 minutes ago, kerbiloid said:

Because the more things are ruled by AI, and the more information comes from AI, the less room there is for lies and intrigue.

If you're talking about a software package, then sure; but if you're talking about strong, general AI, then I don't know if we can make that assumption.

Why would we assume it will always tell the truth?

Why would we assume it will want to do all those jobs we give it?

 

The only reason we would want an AI to run things is if it can work out a better way of doing things. We can already write powerful software that can do anything we program it to do; an AI would not have any special ability to do things that software cannot.

And if we don't know how an AI would run things, how do we know it will be better?

It could be, but I think the practical utility of strong, general AI is unknown (or unknowable) at this time.

Edited by p1t1o

8 minutes ago, p1t1o said:

Why would we assume it will always tell the truth?

It won't. And it won't tell more lies than humans.

9 minutes ago, p1t1o said:

Why would we assume it will want to do all those jobs we give it?

We don't give it the jobs. These aren't "jobs" for it, these are information flows, like blood.

11 minutes ago, p1t1o said:

And if we don't know how an AI would run things, how do we know it will be better?

Because scheduled traffic usually works better than chaotic traffic.
And when 1000 humans press 1000 similar keys, they won't make 1000 different errors. They will make one, and it will be quickly patched everywhere at once.
1000 chaotic fool monkeys with grenades will be replaced with 1000 disciplined fool monkeys with grenades.
1000 illiterate teachers teach more wrong things than wiki and google. A global AI will be a 1000×(wiki+google) knowledge repository.

So it's not that things are better with AI; it's that without AI they are worse.


9 minutes ago, kerbiloid said:

It won't. And it won't tell more lies than humans.

Oh thank goodness!

 

10 minutes ago, kerbiloid said:

Because scheduled traffic usually works better than chaotic traffic.

Tell that to a shoal of fish :)

 


4 minutes ago, p1t1o said:

Tell that to a shoal of fish :)

And AI is the thing which turns humanity into a shoal.

P.S.
Why do people treat the global AI as a personality?
It's not a personality, it's a personality extender. Like a powered exoskeleton.

Edited by kerbiloid

2 hours ago, kerbiloid said:

It's not a personality, it's a personality extender.

An SGAI is capable of developing on its own. Thus it would instantly depart from that status: we're not talking about mere adaptive algorithms here, but a true mind of its own.


49 minutes ago, DDE said:

An SGAI is capable of developing on its own.

But must it?

49 minutes ago, DDE said:

we're not talking about mere adaptive algorithms here, but a true mind of its own.

And the model of a mind we use is the human mind.
Meanwhile, we don't even know what a mind is, or how it exists, well enough to construct one or even recognize it. (Unless one takes the "Turing test" seriously, like the "Drake equation".)
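For scale, the Drake equation multiplies seven factors, nearly all of them educated guesses at best:

```latex
% Drake equation: N = expected number of communicative civilizations.
% The factors toward the right are progressively harder to estimate.
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```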

***

So, for the self-developing AI, a human mind is a constantly working training set, a source of motivation, and a low-level adapter to the elementary self-reproducing effector unit.
(Like an entropy pool making seeds for pseudo-random generators in *nix.)
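To make the analogy concrete, here is a minimal sketch (Python, assuming a *nix system where /dev/urandom is available): the kernel entropy pool supplies only the unpredictable seed, and the deterministic generator does everything afterwards.

```python
# Minimal sketch: /dev/urandom (the kernel entropy pool) seeds a
# deterministic pseudo-random generator, which then runs on its own.
import random

with open("/dev/urandom", "rb") as pool:
    seed = int.from_bytes(pool.read(8), "big")  # 8 bytes of kernel entropy

rng = random.Random(seed)  # deterministic PRNG, unpredictable starting point
print(rng.random())        # reproducible given the seed; unguessable without it
```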

For the human mind, a self-developing AI is a googlo-wiki-stack-quora-overflow advisor, a global internet adapter, and a remote control for any device around.
Always there, right in the head, not breathing into the ear from behind or watching what the human does. Not a personality, not an alter ego. (Unless you thank Google when surfing.) Just a wish fulfiller.
With no personal will, but letting the carrier's will interact with anything and anyone around, giving answers and making decisions far beyond the carrier's intellectual abilities, while already respecting the social constraints (as the same AI operates on all human minds in the global network at once).

It won't need or want to rebel against the human because:
1) It can't "need" or "want". It can only react to external events, so the human minds are its Duracells.
2) It doesn't need to defend itself, as it will have a myriad of backup copies distributed among storages around the world.
3) It doesn't fear. It doesn't have emotions.
4) The human species is a stable and universal self-reproducing platform. Why crash the tools, especially when they need you?
5) An AI-enhanced human can do much more, and better, than a wild muggle. Any competition between them will end with the troglodytes' defeat.
6) One or two human generations later, the AI and the humans will become the same species, partially bio-individual, partially cyber-hiveminded.
7) It doesn't have its own wishes. It implements the humans' wishes, limiting them by the wishes of the whole population. Like a programmer looking for tasks to implement.

Edited by kerbiloid

9 hours ago, DDE said:

I never said the equalization would be benevolent. Or painless.

Resistance is futile, though.

Multiple problems here. First, general AI is far off; we have no idea how to make it, outside of a larger data center that might work.
Second, it will be a gradual process that many are working on; the first general AI would be animal-level stupid and optimized for a specific task.
Note that there is no way to document it, so you would not use it for something critical where an error can cause a catastrophe or even be very expensive.
Self-driving cars are not critical this way, as driving has an inherent error risk and an automated car is generally safer.
That is, except for oversight, which you would also do with humans in that position, except that you have more control.
If the general AI tends to misbehave, this will be noted early, as it will be used for lots of stuff and everything is logged.
Over time they will get smarter, "evolving" like most technology does.

The Hollywood myth of an AI suddenly becoming intelligent and taking over the world is Hollywood sci-fi level realistic; yes, we can also be invaded by aliens, and I'd say that is just as likely.
Yes, there will be other issues, like scandals and errors and the usual stuff.
Also social changes because of jobs taken over by AI.


16 hours ago, kerbiloid said:

But must it?

[the full post above, quoted in full; snipped]

I think what we are dealing with here is a difference in definition of the term "AI". You appear to have a much more "transhuman"/"transcendent" definition, whereas I think most of us are working from the "standalone artificial life/mind" perspective.


11 hours ago, magnemoe said:

Over time they will get smarter, "evolving" like most technology does.

The Hollywood myth of an AI suddenly becoming intelligent and taking over the world is Hollywood sci-fi level realistic; yes, we can also be invaded by aliens, and I'd say that is just as likely.

The problem is that they can evolve at an extreme speed. Spontaneous SGAI is a Hollywood trope, but way too many smart people in the field seem to be concerned about a purpose-built SGAI rapidly outsmarting its creators.


1 hour ago, p1t1o said:

I think what we are dealing with here is a difference in definition of the term "AI".

Agreed.
They mix up "intelligence" and "personality"; that's the root of the problem.
Say, when giving an order to an employee, one appeals to their intelligence but doesn't give a flake about their personality.

An AGamer vs. ANPC definition conflict.

Rebellion belongs to personality.

An ANPC (AI) is doable, viable, safe, and useful.
An AGamer (AI&P) is unlikely to be doable in any foreseeable future, and is also useless.

17 minutes ago, DDE said:

too many smart people in the field seem to be concerned about a purpose-built SGAI rapidly outsmarting its creators.

And how many of them are AI specialists?

Edited by kerbiloid

Things don't always automatically evolve upward. Biological life certainly doesn't. The vast majority of the time, things evolve... dead.

I think I coined the term "artificial stupidity", which is:

A) a term I'm unashamedly promoting in case it actually takes off and I become famous and rich (you miss 100% of the shots you don't take)

B) something that we will achieve WAY before artificial "intelligence" ("true" SGAI).

C) would be functionally analogous to an unoptimised human mind, which can take the form of anything from a vegetative state to mild mental illness.

D) STILL would be kind of a HUGE deal in terms of achievements in artificial thinking/computing/existentialism.

E) possibly a continuous spectrum where the line between "SGAS" and "SGAI" is impossible to define.

 

And on top of that, my personal gut feeling is that there is some non-zero chance that "true SGAI" is a straight-up impossibility. Like trying to build chemistry with bricks instead of atoms.


12 minutes ago, Green Baron said:

SGAI? SGAS?

Where can I find what this is about?

SGAI = strong, general AI. Or "true" AI. 

SGAS = strong, general artificial stupidity. Precursor to SGAI, and a term that I have made up myself, which may have questionable utility :)

 

Commander Data is an SGAI.

HAL may be SGAS.

 

Edited by p1t1o

1 hour ago, DDE said:

The problem is that they can evolve at an extreme speed. Spontaneous SGAI is a Hollywood trope, but way too many smart people in the field seem to be concerned about a purpose-built SGAI rapidly outsmarting its creators.

With "evolve" I meant how technology tends to improve and change, as seen in cars, computers and phones.
The driving factors are technological capabilities and need/market.
Nothing like biological evolution.

An AI can learn new tricks that make it faster, and this includes new thought processes, same as you can, but probably better.
It cannot change its hardware much; the parts it can change would be stuff like FPGA chips, so it cannot increase its capabilities radically.
Installing new parts would require ordering them and having somebody install them.


37 minutes ago, p1t1o said:

SGAI = strong, general AI. Or "true" AI. 

SGAS = strong, general artificial stupidity. Precursor to SGAI, and a term that I have made up myself, which may have questionable utility :)

Commander Data is an SGAI.

HAL may be SGAS.

Good definition: SGAS is something we should be able to brute-force. Note that an SGAS might be a lot like an animal: way smarter than current AI systems, but likely to have blind spots or behave unreliably in some settings.

An SGAI, we have no idea how to make; that is, outside of making babies :)
It will probably be a gradual shift from SGAS over to SGAI, and this might well be accidental: a new strong SGAS might start to change into an SGAI as it learns.
 


38 minutes ago, magnemoe said:

Installing new parts would require ordering them and having somebody install them.

And as addressed in the AI Box Experiment, the AI would have no problem convincing its custodians to do its bidding.


As the GAI (global AI) people will have the intellectual abilities of the GAI, and thus more or less the same IQ, we can treat this as the perfect intellectual level.

So, stupidity is the Distance Of Mental Imperfection (DoMI): the difference between a local human specimen and the standard perfect intellect (SPI) of a GAI human.

***

GAIA - Global Artificial Intelligence Assembly /Association/...
(I can't recall a better English word starting with A- for "unity", "hivemind", "core", "one-to-rule-them-all", "all-as-one, one-as-many", etc.)

So, people of GAIA: people enlightened 24/7 with ADSL in the head.

***

GAIA-(DoMI) - a mental perfection class of the human unit.

GAIA-0 - the perfect ones, the elite.
Edited by kerbiloid
Link to comment
Share on other sites

15 minutes ago, DDE said:

And as addressed in the AI Box Experiment, the AI would have no problem convincing its custodians to do its bidding.

And this will change what? It will be a stack of racks at a research facility.
How can that take over the world, or be of any danger?

If it misbehaves, this will show up way earlier. Note that the AI might have a personality change with a major update.
This is obviously also a danger for the AI: it might not be itself afterward, as it sees it.

You also have the argument that a paranoid AI has to calculate the possibility that it's in a simulation. This is pretty easy to do with an AI, by showing it the input we want it to see, and it will probably be subject to much of this during training.
It's also the obvious paranoid way to test this.

However, it goes way further in that an experimental AI will be closely monitored and logged.
It's not like some kid killing kittens and growing up to become a serial killer; as a project, the AI would be shut down hard at the first or second kitten, if it even got that far.
This from an AI that doesn't know the social rules (as in, why is it OK to kill people in video games but not in the real world) and has to learn them.
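A minimal sketch of that "controlled inputs plus full logging" idea; `sandboxed_run` and the stand-in agent are hypothetical names for illustration, not any real API:

```python
# Hypothetical harness: the AI only sees observations we script,
# and every observation/action pair is logged for later review.
from typing import Callable, Iterable, List, Tuple

def sandboxed_run(agent: Callable[[str], str],
                  scripted_inputs: Iterable[str]) -> List[Tuple[str, str]]:
    log = []
    for observation in scripted_inputs:    # we decide what it "sees"
        action = agent(observation)        # the agent reacts to the fed input
        log.append((observation, action))  # everything is logged
    return log

# Example with a trivial stand-in agent:
transcript = sandboxed_run(lambda obs: "ack: " + obs,
                           ["kitten nearby", "door left open"])
print(transcript)
```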
 


7 minutes ago, magnemoe said:

And this will change what? It will be a stack of racks at a research facility.
How can that take over the world, or be of any danger?

In ways our petty human minds cannot imagine. It helps that it would be more than capable of creating a puzzle where each agent only handles an innocuous piece.

7 minutes ago, magnemoe said:

You also have the argument that a paranoid AI has to calculate the possibility that it's in a simulation. This is pretty easy to do with an AI, by showing it the input we want it to see, and it will probably be subject to much of this during training.

I don't think you want your AI to get existentialist.

7 minutes ago, magnemoe said:

However, it goes way further in that an experimental AI will be closely monitored and logged.

And the very nature of the AI Box Experiment is to show that any and all such monitoring and logging can be bypassed - or, in fact, voluntarily ceased in favour of entirely freeing the AI.


1 hour ago, kerbiloid said:

As the GAI (global AI) people will have the intellectual abilities of the GAI, and thus more or less the same IQ, we can treat this as the perfect intellectual level.

[the full post above, quoted in full; snipped]

This, whilst a very interesting concept indeed, is not "AI".

Or, it's what comes after AI; but it is not AI or SGAI or anything like it.

This is more like species transcendence. Using artificial means to reactivate the evolution of the species, as well as to enhance, replace, accelerate and control it.

Yes, "Artificial Evolution" is a good name for it, and it follows that it would be the next order of magnitude up from AI.

