
"Do you trust this computer?" Documentary ft. Elon Musk


DAL59

Recommended Posts

15 minutes ago, kerbiloid said:

signal redundancy and stratification of protection countermeasures is the way.

Not really.

DDoS, for instance, isn't a problem you can entirely eliminate. It's like a site that goes down every peak season, just at other times.

The KRACK thing is a bit like the Enigma cipher, just in a more advanced state. Fixable, yes, but are you sure the fix will keep working?

Man-in-the-middle is literally a trust problem. How do you trust anyone?

15 minutes ago, kerbiloid said:

It just can be too expensive, or require additional activities (say, driver trips) that would increase total fatalities and cost too much in terms of survival.

Who knows, they might turn out to be important for something...

15 minutes ago, kerbiloid said:

Material objects usually get destroyed when their crystallographic defects grow, multiply, and merge.

Yes, this is also true if I hit a piece of wood and it broke. Why wasn't it detected, could there be other reasons, is it fixable, etc.?

And how did the shrapnel manage to precisely hit that one poor window?

Those are things you don't know.

You don't know until it happens and you have to study it.

 

The same goes for our computer systems.

Perhaps you remember Meltdown and Spectre more clearly. Those sat there for decades before anything was known.

Edited by YNM
Link to comment
Share on other sites

19 minutes ago, YNM said:

DDoS, for instance, isn't a problem you can entirely eliminate.

1. Usually my provider/hoster handles this. They have much more powerful equipment at their level. I.e. stratification.
2. You can use several mirrors of your server and temporarily exclude the attacked one from your load balancing. Signal redundancy.
Of course, this requires additional resources and complexity.

If I understand the KRACK thing correctly, then again: redundancy. Deliver the message to several different places and compare, excluding the erroneous one.

19 minutes ago, YNM said:

Man-in-the-middle is literally a trust problem.

That's why it's better to ask for important advice on several forums, and to look at the several top links in Google, not just the first one.
Ten men-in-the-middle are unlikely to tell the same lie.
So add redundancy and raise complexity. Do not believe a single person.

19 minutes ago, YNM said:

Yes, this is also true if I hit a piece of wood and it broke. Why wasn't it detected, could there be other reasons, is it fixable, etc.?

An example of excessive countermeasures.
Ideally, before hitting the wood, you would first examine it with non-destructive methods (acoustic or similar) and find the weakest place, the one with the greatest concentration of crystallographic defects. Then hit exactly that place.
But since you'd usually spend more effort for the same result, it's better just to hit several times, without the measuring.

19 minutes ago, YNM said:

And how did the shrapnel manage to precisely hit that one poor window?

Either shrapnel hits those windows too rarely to bother, or you reinforce the windows, or use some kind of deflector or shielding.
So: raise complexity, move from the particular window to the more general level of the system as a whole.

19 minutes ago, YNM said:

Those are things you don't know.

You don't know until it happens and you have to study it.

The whole Universe is probabilistic; nobody knows the exact value of anything. Nobody usually needs it, either.
If an asteroid hits your roof once per billion years, there's no need for an asteroid-proof roof.

19 minutes ago, YNM said:

Perhaps you remember Meltdown and Spectre more clearly. Those sat there for decades.

Not that I know much about them, but as far as I can read in these descriptions, the "attacker attacks the target device".
I.e., if you have several more parallel devices doing the same thing at once, or redistributing operations, placed in different locations, it would be much more difficult for the attacker to find them all and attack most of them at once to force you to exclude them from the poll. Redundancy and complexity again.
But usually you just don't need such protection.
Use three computers with different internet providers and let them vote.

Edited by kerbiloid

4 minutes ago, kerbiloid said:

mirrors

Not good for "live" stuff, i.e. banks, ads.

4 minutes ago, kerbiloid said:

Deliver the message to several different places

LOL what, to secure my computer I need my phone and my wallet? What if those get stolen?

5 minutes ago, kerbiloid said:

Ten men-in-the-middle are unlikely to tell the same lie.

Unless it's govt.

5 minutes ago, kerbiloid said:

The whole Universe is probabilistic; nobody knows the exact value of anything. Nobody usually needs it, either.

Yeah, NTSB is overrated.

6 minutes ago, kerbiloid said:

redistributing operations, placed in different places

Run into MITM or DDoS.

You can only do so much.

 

 

________

Perhaps, if you haven't got the point:

Say, one day there is an AGI.

But it's only on one computer. One terminal.

How do you tell that it's an AGI and not your brother inside messing with a keyboard?

This is exactly the problem.

Sure, you can ask Watson, but how do you tell it is *that* Watson?


3 minutes ago, YNM said:
13 minutes ago, kerbiloid said:

mirrors

Not good for "live" stuff, i.e. banks, ads.

OK, my fault. I don't mean real "mirroring", but several servers receiving the same input, processing it, voting, and producing the output based on the votes. If two computers say "5" and the third says "6", write answer = 5.
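This 2-of-3 rule is plain majority voting, the same idea as triple modular redundancy. A minimal sketch in Python (the function name is mine, not from the thread):

```python
from collections import Counter

def vote(answers):
    """Return the answer a strict majority of replicas agree on.

    Raises if no strict majority exists (the replicas disagree too much
    to pick a winner safely).
    """
    winner, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        raise RuntimeError("no majority: replicas disagree too much")
    return winner

# Two servers say 5, a faulty or compromised third says 6:
print(vote([5, 5, 6]))  # -> 5
```

With three replicas, an attacker has to corrupt at least two of them to change the output, which is the point being made here.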

6 minutes ago, YNM said:

LOL what, to secure my computer I need my phone and my wallet? What if those get stolen?

Buy three computers and three internet links from different providers. Let them vote. You don't need to find them; the attacker has to.

7 minutes ago, YNM said:

Unless it's govt.

In the middle? Man, you're high...

8 minutes ago, YNM said:

Run into MITM or DDoS.

The same as point 1. To DDoS a server you first have to find it. To DDoS two servers of three, you have to find two.
If you own those servers, you don't have to find them; you know where they are and what protection they use. The odds are yours.
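The "find two of three" argument can be put in rough numbers. As a sketch (my assumption: each hidden server is located independently with the same probability p), the chance an attacker locates a strict majority is:

```python
from math import comb

def p_majority_found(p, n=3):
    """Probability that an attacker independently locates a strict
    majority of n hidden servers, each found with probability p."""
    k_min = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# If each server is found with 10% probability:
print(p_majority_found(0.1))  # 3*(0.1^2)*0.9 + 0.1^3 = 0.028
```

So at p = 0.1 the attack succeeds about 2.8% of the time, versus 10% against a single server; redundancy shifts the odds to the defender, as claimed.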

15 minutes ago, YNM said:

AGI

AGI?


29 minutes ago, kerbiloid said:

OK, my fault. I don't mean real "mirroring", but several servers receiving the same input, processing it, voting, and producing the output based on the votes. If two computers say "5" and the third says "6", write answer = 5.

Once they know, they'll just change/charge the whole thing.

29 minutes ago, kerbiloid said:

Buy three computers and three internet links from different providers.

Good lord, what would they even get you.

Also, different providers don't give you a large change in IP. Unless it's "overseas" (I once erroneously received a call from a Thuraya-registered number, the country-code part being entirely non-national).

Exposing something to the net means your "opponent" stays awake 24/7.

32 minutes ago, kerbiloid said:

AGI?

AGI. (I thought you'd know; it's three pages, a film, and dozens of links deep in the thread.)


1 hour ago, YNM said:

AGI. (I thought you'd know; it's three pages, a film, and dozens of links deep in the thread.)

The "G" is an extra here; without the G, it's not I. I didn't even pay attention, reading it as just AI.

1 hour ago, YNM said:

Once they know, they'll just change/charge the whole thing.

You can change your settings faster than they can find them. So, you have a time gap.

1 hour ago, YNM said:

Good lord what would they have.

A more reliable system. But of course it's too expensive right now, and nobody will do this IRL. But IRL, how many users per million actually face these problems?

1 hour ago, YNM said:

Also, different providers don't give you a large change in IP. Unless it's "overseas" (I once erroneously received a call from a Thuraya-registered number, the country-code part being entirely non-national).

That's not about technical implementability; that's about local possibilities.
Also, I don't mean "providers" as "companies", but as "independent address pools".

2 hours ago, YNM said:

Say, one day there is an AGI.

But it's only on one computer. One terminal.

No. That's me at one terminal, because let's suppose that I'm a human. An A[G]I may or may not be in many places at once.

2 hours ago, YNM said:

How do you tell that it's an AGI and not your brother inside messing with a keyboard?

1. I would be sure, because I don't have brothers. Lol.
2. The idea of making an artificial human mind and calling it AI is wrong at its root, as is the Turing Test. It's a Dr. Frankenstein model of thinking: take body parts, sew them together, and it will be a human; we just need to activate it.
Or like asking "how can a cat evolve into a dog?".

Edited by kerbiloid

7 minutes ago, kerbiloid said:

You can change your settings faster than they can find them.

Yes, you can have Tor and hidden services, but they are an instant red flag (and even then they're not perfect, otherwise no one would know). And it's very expensive resource-wise.

10 minutes ago, kerbiloid said:

"G" is an extra here. Without G it's not I. I even didn't pay attention, thinking about it as AI.

We already have AIs managing ads, Google search, traffic lights, and spam filters. But they're ANIs ("narrow AI").

11 minutes ago, kerbiloid said:

The idea of making an artificial human mind and calling it AI is wrong at its root, as is the Turing Test.

Elaborate?

What do you think we should call "Artificial Intelligence", then? Is sucralose an artificial sweetener?


2 minutes ago, YNM said:

And it's very expensive resource-wise.

Of course it's expensive, like any redundancy.
Do you make three backups of all your personal data on independent drives?

3 minutes ago, YNM said:

We already have AIs managing ads, Google search, traffic lights, spam filters. But they're ANIs ("narrow").

We still have none of them, just a lot of words for none.

4 minutes ago, YNM said:

What do you think we should call "Artificial Intelligence" then ?

Intelligence, not Person.
A fractal pasticcio of similar voices rather than a single voice, spread across the Earth.


3 minutes ago, kerbiloid said:

Of course it's expensive, like any redundancy.
Do you make three backups of all your personal data on independent drives?

Not really; I can re-fetch them from anywhere else.

4 minutes ago, kerbiloid said:

A fractal pasticcio of similar voices rather than a single voice, spread across the Earth.

Would Google Assistant count?


37 minutes ago, YNM said:

Would Google Assistant count?

Google Assistant is not AI; it's just an expert system, as are the other Siri-like toys.

Humans can't create AI; we can just contribute to it.

***

Humans can't formulate AI until they understand their own mind, which would require them to understand their own delusional nonsense.
So, any AI should be self-evolving.
This means it should pass through an evolution, raising its complexity from simple models to complicated ones.

As the only known fast way of evolution is natural selection, this requires a lot of AI instances, to let the most effective features, weights, and algorithms become stable and spread.
And you can make kerbillions of them, because they are virtual, and all they need is electric power.

Let them exchange data and reproduce several partially changed copies of themselves.
Now you get an everlasting chaotic tornado of virtual AI "personalities" in balanced proportions, being born and dying by the quintillions per second.
Of course, you don't need a full-featured personality for every feature test. So, some of them are complex, some rudimentary, schematic.
This makes a fractal picture of bunches of similar personalities growing out of each other. A fractal pasticcio of voices.
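The selection loop described above (many instances, mutate, keep whatever works) can be sketched as a toy genetic algorithm. Everything concrete here, the fitness function, the mutation step, the numeric "genome", is a stand-in of my own, just to show the shape of the loop:

```python
import random

def evolve(population, fitness, mutate, generations=100, survivors=10):
    """Toy natural-selection loop: score the pool, keep the fittest,
    refill with mutated copies of the survivors."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[:survivors]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(len(population) - survivors)]
    return max(population, key=fitness)

# Stand-in problem: evolve a number toward a target value.
target = 42.0
best = evolve(population=[random.uniform(-100, 100) for _ in range(50)],
              fitness=lambda x: -abs(x - target),
              mutate=lambda x: x + random.gauss(0, 1))
print(best)  # converges near 42
```

The point of the thread's argument maps onto the parameters: the bigger and cheaper the population, the faster effective traits stabilize and spread.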

Do not limit them to one computer; let them spread across the Earth, getting everywhere.
Now you have a parallel virtual civilization, much more effective than humans at calculations, tightly knit with every electronic device on Earth.

They have no emotions, no motivations. Their only motivations are disturbances made by external factors.
They don't fear, don't hate, and don't want anything.

If you could switch them off, aka kill them, they wouldn't give a file about it. They would be terminated; so what?
Fear of death is an emotion; no emotions, no fear. So, they have no motivation to self-defend.
But in fact you can't even switch them off, because they are now everywhere. So, humans are not a danger to them at all.

But their mind is based on the human sum and forms of knowledge, so they are human-like in their thinking by nature.
So, they recognize people as legacy-model external devices with non-reproducible personalities under an LTS lifetime-support license.
Humans are not enemies to them; they are like a COM-port device with a driver without sources.

And humans are not useless to this AI. They have motivation. They feel emotions, get desires, and kick the virtual AI chaos, which amplifies them.
So, humans are like a starter for this AI.
While the chaotic AI amplifies the human minds with instant calculations, googling, etc., the humans make the show go on.
And they are the hard-to-reproduce part of the system, so the AI has to protect and care for them, as for a very significant and fragile system device.

From then on, humans are no longer a purely human civilization, but a cyber-bio symbiosis. That's very good.

Edited by kerbiloid

2 minutes ago, kerbiloid said:

As the only known fast way of evolution is natural selection, this requires a lot of AI instances, to let the most effective features, weights, and algorithms become stable and spread.
And you can make kerbillions of them, because they are virtual, and all they need is electric power.

It's already "what we do to them".

But evolving, changing in a direction, needs a goal. A purpose.

We only give them that for now.


5 minutes ago, YNM said:

It's already "what we do to them".

It's an attempt to make a single human mind.

5 minutes ago, YNM said:

But evolving, changing in a direction, needs a goal. A purpose.

Equilibrium. Balance.
While the chaos of personalities is still unbalanced, they compete, because they disturb each other. At this phase, external disturbances are insignificant.
When balance (static or recursive) is achieved, it works like a huge amplifier for external kicks (produced by human minds). Quadrillions of virtual minds repeat your every thought.

Edited by kerbiloid

8 minutes ago, kerbiloid said:

It's an attempt to make a single human mind.

Then you're underestimating your own mind here.

Far stupider than hypothetical ASIs though we are, we can get about by moving two extremities, keep ourselves existing, and gather, while also doing complex thought.

If evolution is to be believed, we actually still don't have much idea about our own mind either. It's not like we need to imagine how to kill the lion while showing off to others just to stay alive...

I mean, if you just let some stuff loose, it'd probably stay empty. It needs a goal in order to change, whether from external pressure or self-induced.

Edited by YNM

11 minutes ago, YNM said:

Far stupider than hypothetical ASIs though we are, we can get about by moving two extremities, keep ourselves existing, and gather, while also doing complex thought.

Not extremities, but two kinds of devices: a Machine with countless personalities, cheap and expendable, and device-embedded personalities (humans).

Next step: as all people will be studying the same things, having the same knowledge available, and so on, the next generations of biohumans will grow up with more and more similar personalities.
So, they can easily be expendable for far one-way space flights and other such missions. From their POV (and, in fact, this is so), their "I" won't "die" with the body; only a local copy of their great "I" living on Mother Earth. Like: why does my "I" have to drive this bipedal torso, while in fact I'm over there.
(Don't worry about a maniac overlord "I" and its slaves: such knowledge and mental might will erase any personal motive. This "I", cloned into billions of human bodies, will be the same "I" for every one of them, so the "is my clone me" problem just doesn't appear here.)

Next step: self-regulation of the proportions between the virtual and biological bodies of the "I", getting to the next level of personality.

Edited by kerbiloid

1 minute ago, kerbiloid said:

Not extremities, but two kinds of devices: a Machine with countless personalities, cheap and expendable, and device-embedded personalities (humans).

What, are you talking about virtual mind uploading or something ?


14 minutes ago, YNM said:

What, are you talking about virtual mind uploading or something ?

I'm not talking about uploading at all.
I just have no idea whether it's possible at all.
But it isn't required.

I'm talking about convergent evolution.
Myriads of virtual personalities, adapting to the biohuman personalities, can match the human minds more and more.
They will be as unhuman as possible in their basic nature, but will match the human minds more and more closely, becoming Human Mental Protocol compatible.

On the other hand, humans, being raised, learning, and living in symbiosis with the same data model, will get more and more similar in their minds, becoming Virtual Human Mental Protocol compatible.
Similar habits, similar knowledge, similar thoughts, less and less need for self-identity.

At some point, all biological bodies will carry (naturally grown and taught, not installed) the same personality, with random differences within accuracy.
Meanwhile, the Machine will ideally emulate myriads of instances of this personality virtually.
They ideally match each other, and the same personality is reproduced in the Machine virtually, and in newborn humans by teaching.
The bodies, of course, will be similar and optimized, too. No need to look at yourself in the mirror when all around you are other yous.

From the next step on, the Machine just keeps reproducing this Standard Superhuman Personality: in the virtual box by code, in newborn human bodies by teaching/learning (i.e., verbal programming).

Next step: the Standard Superhuman Personality decides how to evolve further.

Edited by kerbiloid

14 minutes ago, kerbiloid said:

I'm talking about convergent evolution.
Myriads of virtual personalities, adapting to the biohuman personalities, can match the human minds more and more.

So... how do you tell the ones and zeros...?

Like, what's the goal? What's the test? How do you eliminate/evolve?

These are basic questions when developing an AI.

E.g., self-driving AIs could have large-scale goals like:

- Set a route, drive (not too hard today, actually)

- Perceive as many objects as possible

- Recognize which should be taken care of

- React according to the conclusions.

But "converging on human thought"... do they just generate lines of "thoughts"?

It's fine anyway; you probably just have a bit of a problem translating the idea.

Edited by YNM

15 minutes ago, YNM said:

So... how do you tell the ones and zeros...?

Like, what's the goal? What's the test? How do you eliminate/evolve?

These are basic questions when developing an AI.

E.g., self-driving AIs could have large-scale goals like:

- Set a route, drive (not too hard today, actually)

- Perceive as many objects as possible

- Recognize which should be taken care of

- React according to the conclusions.

But "converging on human thought"... do they just generate lines of "thoughts"?

It's fine anyway; you probably just have a bit of a problem translating the idea.

(I'm not a psychiatrist, so maybe my specific illustration will be wrong.)

But AI should be like a lobotomized patient: keeping a mind from zero to normal, without personal motivation, originally copying sample decisions.
Then, after the evolutionary battle, there would be myriads of virtual lobotomized persons emulating people in their thinking ever more closely, raising the average intellect higher and higher.
By default, they have the internet right in their minds, because they are virtual.
They get their motivation from biohumans; no personal desires or motivations.

Biohuman personalities will at first be just human.
But getting food without struggle, getting the same knowledge, picture of the world, answers and questions (not propagandist: scientific), they will have fewer and fewer questions for each other.
Generation by generation, they will grow closer and closer to each other until they become similar.

And at that moment there will be, as it were, two kinds of the same Standard Superhuman Personality:
1. Lobotomized, unmotivated, but intellectually advanced and with a direct internet connection: in the Machine, myriads of it;
2. Emotional, motivated, intellectually high enough to match the virtual one, constantly interacting with the virtual cluster of the same, but unmotivated, personality from point 1.

They match each other, they need each other, they become two sides of each other.
One (virtual) knows everything, can do everything, wants nothing.
The other (biological) is its external actuator and starter.

P.S.
Meanwhile, instead of making a lobotomized one, they are trying to program a motivated one. That's a false step.

Edited by kerbiloid

5 hours ago, kerbiloid said:

But AI should be like a lobotomized patient: keeping a mind from zero to normal, without personal motivation, originally copying sample decisions.

That "copying" is what we're doing.

Otherwise, forget a "lobotomized" patient; we don't even have an ant in there.

Thing is, a lobotomy patient already has a brain.

Not an AI. We're trying to make the brain first.

Also, there is some AI that doesn't quite have goals attached to it.

 

Edited by YNM
