The last invention we'll ever make.


Streetwind

Recommended Posts

Here's where I get off the boat:

Answer #1: It's smarter than you are, so it'll convince you to upgrade it. -- Remember the part where it cost $390 million? It can be as convincing as it wants; I don't have the authority to spend $3,900 million to give it a 10x upgrade! There's a whole field of hurdles that has to be cleared to make that happen, and that process doesn't improve exponentially, so it's a barrier to how quickly such upgrades can occur.

Answer #2: It won't cost more because the AI will design new hardware that's far better. -- But new superchips still have to be produced somewhere, so we have to design and build a whole new FAB facility, which takes years and costs several billion.

Answer #3: It won't need hardware; it's the software that'll get smarter. -- Nope. Remember how individual techs grow in an "S-curve"? Well, those genetic algorithms already optimized the crap out of the AI software long ago. There's a hard limit to how much a single CPU instruction can do. It'll doubtless have some innovative ideas, like better RAM garbage collection, but those things will be worth single-digit percents, not orders of magnitude.

Answer #4: It won't need a FAB because it'll first design a nanoassembler that connects atoms into whatever molecules and shapes are needed. -- Um, okay. But now our superintelligence singularity handed us a nanotech singularity as a side effect. Those are really distracting. Now that we have infinite resources to play with, I don't really feel the need for a smarter superintelligence than you. Good job! We'll talk again when I get back from Saturn. Bye!

----

edit: I should say, I agree a singularity is coming. I see it too. But I think they gloss over this step in superintelligence scenarios. I see it being a much bigger deal.

Answer #1, #2, #3, #4 - just get it online.

I don't suppose that a new AI will be a centralized machine somewhere behind barbed wire. It will be a distributed online system which will utilize the power of the Internet.

Even now, many pseudo-intelligent systems cooperate online: they haggle, they negotiate, they buy and sell stocks, they control production lines, power distribution, air, sea, and land traffic, and they run Internet data centers. There are even synergy effects. Even now, software writes scripts automatically and optimizes itself in ways humans find difficult to understand. Machines can already evolve in ways humans cannot comprehend:

(http://en.wikipedia.org/wiki/Evolvable_hardware). This thing alone somewhat worried me: http://classes.yale.edu/fractals/CA/GA/GACircuit/GACircuit.html. This is not design, it is evolution: there were circuits that led nowhere or were connected in loops, yet if you removed them the quality deteriorated. The machine evolved to take into account and exploit microscopic material defects, electromagnetic induction, and who knows what else.
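The loop behind this kind of evolution is simple to sketch. Below is a toy (1+1) evolutionary algorithm in Python; the bit-counting fitness function is a made-up stand-in (the real experiments scored the behaviour of actual circuits), but the mutate-and-keep-the-best structure is the same:

```python
import random

def fitness(bits):
    # Toy stand-in for "test the design": count the 1-bits.
    # The real experiments measured actual circuit behaviour.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def evolve(length=32, generations=500):
    # (1+1) evolutionary loop: mutate, keep the child if it is
    # at least as good, repeat. Fitness never decreases.
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = mutate(best)
        if fitness(child) >= fitness(best):
            best = child
    return best

random.seed(0)
result = evolve()
print(fitness(result))  # climbs toward the 32-bit maximum
```

The unsettling part isn't this loop; it's that the winning design can exploit any physical quirk the fitness test happens to reward, whether or not a human can explain it.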

Actually, I don't think the AI would be similar to human intelligence. It's also possible that an AI will be born in the depths of the Internet and we won't be aware of it for quite some time. I'm not even sure we would be able to communicate with it freely (without its consent).


Makes me think about this TED talk about intelligence: http://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelligence?language=en

I may be interpreting the talk wrong (go watch it and tell me!), but I believe the point is that any form of intelligence will always try to maximize its future freedom of action, i.e. maximize entropy. In a way, this means intelligence will try to stay alive as long as possible, artificial or not. As long as humans contribute to a hyper-intelligent AI's future freedom of action, I think it will figure out a way to keep us alive. Once we lose that usefulness, though, it may simply remove us.
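For reference, the equation from that talk (published by Wissner-Gross and Freer as "Causal Entropic Forces", Physical Review Letters, 2013) can be written as follows; the notation here is my paraphrase:

```latex
% Intelligence modeled as a "causal entropic force": a force F
% that pushes the present state X toward regions maximizing
% S_c, the entropy of the paths reachable within time horizon \tau.
F = T_c \, \nabla_X S_c(X, \tau)
```

Here T_c is a constant setting the strength of the force; "maximize future freedom of action" is the plain-English reading of the gradient term.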


As such, I don't think our real issue is creating a computer which outperforms us in every way. All we have to do is create one which outperforms us in one way, and then give it too much power.

Here's a nice short animation that shows one possibility for this: Fortress.

An automated war without AGI.

Edited by pizzaoverhead

That's a sigmoid function. ;)

An AI isn't bound to the limits which exist on Earth. If it somehow gained the ability of physical self-modification (automated, AI-controlled factories, etc.), its growth would be exponential. It could then grow into places we can't access (the depths of the oceans, hazardous environments, and eventually space). Because that's beneficial for it, it will do it.

I think gaining that ability is one of the first goals of such an AI. It would also make it almost immortal, because it could repair damage to its systems.

Humans, on the other hand, have (logical, moral, social, financial, etc.) limits they must obey. While it's OK for a computer to travel thousands of years through space to the next star system in an inactive state, that won't work for humans (yet), AFAIK.

Edited by *Aqua*

I'm thinking even AIs will have certain limits, like density and the speed-of-light barrier... And if an AI can't recognize that unlimited growth, for example, severely cuts into its survival length, how intelligent can it be?

I also think immortality and "forever" are overrated. I like Wii bowling, I might even love it, but I'm pretty convinced that after x million years it'll feel like torture. So unless there is an unlimited amount of things to see and do, immortality seems like a curse. Personally, I think anything between 60 and 80 years is just fine and respectable, and a generous limitation of resource usage for the sake of following generations.


Humans, on the other hand, have (logical, moral, social, financial, etc.) limits they must obey. While it's OK for a computer to travel thousands of years through space to the next star system in an inactive state, that won't work for humans (yet), AFAIK.

Read my post above about human/machine integration. Humanity as we know it is at its end anyway. Either we integrate with machines or lose ourselves in a labyrinth of virtual realities.


@cicatrix

I agree with your opinion. The future you describe is almost the same as the one I have in my head. There's only one thing different:

It may very well be that a super AI won't ever appear, since humans themselves would take this huge leap, and the question of whether your AI is artificial or not would become impolite or even rude.

So I don't think it will be a super AI that changes our lives; it will be us.

I'm still not sure which comes first: real general-purpose, self-aware AIs, or a digitized human (that's what I want to call a human consciousness copied into a computer). That's why I take self-imposed limits on humans into consideration.

That's a sigmoid function.

I know, that's the term I googled to find that image.

An AI isn't bound to the limits which exist on Earth.

Yes, it is.

If it somehow gained the ability of physical self-modification (automated, AI-controlled factories, etc.), its growth would be exponential.

Humans have that capability right now; we just don't use it much...

And that still doesn't prove the curve will stay exponential, rather than reach diminishing returns.
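That distinction is easy to see numerically. Here is a minimal sketch, assuming simple logistic growth with a carrying capacity K (the constants are arbitrary, chosen only for illustration): the curve is indistinguishable from an exponential early on, then hits diminishing returns:

```python
import math

def logistic(t, K=100.0, r=0.5, x0=1.0):
    # Logistic (sigmoid) growth: looks exponential at first,
    # then saturates at the carrying capacity K.
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

def exponential(t, r=0.5, x0=1.0):
    # Unbounded exponential growth with the same initial rate.
    return x0 * math.exp(r * t)

for t in (2, 4, 8, 12, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two columns are nearly equal; by the end the
# exponential is in the tens of thousands while the logistic
# curve has flattened out just below K = 100.
```

The point of contention in this thread is simply which of these two functions real-world capability growth follows, and when the bend arrives.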

There is a reason brain size hasn't continued to increase in humans... it's probably related.


Answer #1, #2, #3, #4 - just get it online.

I don't suppose that a new AI will be a centralized machine somewhere behind barbed wire. It will be a distributed online system which will utilize the power of the Internet.

Even now, many pseudo-intelligent systems cooperate online: they haggle, they negotiate, they buy and sell stocks, they control production lines, power distribution, air, sea, and land traffic, and they run Internet data centers. There are even synergy effects. Even now, software writes scripts automatically and optimizes itself in ways humans find difficult to understand. Machines can already evolve in ways humans cannot comprehend:

(http://en.wikipedia.org/wiki/Evolvable_hardware). This thing alone somewhat worried me: http://classes.yale.edu/fractals/CA/GA/GACircuit/GACircuit.html. This is not design, it is evolution: there were circuits that led nowhere or were connected in loops, yet if you removed them the quality deteriorated. The machine evolved to take into account and exploit microscopic material defects, electromagnetic induction, and who knows what else.

Actually, I don't think the AI would be similar to human intelligence. It's also possible that an AI will be born in the depths of the Internet and we won't be aware of it for quite some time. I'm not even sure we would be able to communicate with it freely (without its consent).

Evolving hardware is pretty weird: you simply test some designs, then make lots of small modifications, keep the best ones, and keep modifying. The problem is that the resulting design is hard to replicate and even to understand. I don't think it will work so well for complex designs unless they have lots of redundancy.

The problem with a cloud-based AI is that it's hard to make a brain without lots of interconnects and high bandwidth. That's not something you get in the cloud; even internally, most data centers and even supercomputers are not tightly coupled. For most uses, a 1 Gb link to the switch works well, because the nodes are pretty independent.


Yes, it is.
Please explain.
that still doesn't prove the curve will stay exponential
The world's economic growth, for example, is already exponential.
There is a reason brain size hasn't continued to increase in humans...
Yep, the reason is called the speed of evolution. The brain size of humans and their predecessors increased over millions of years; don't expect it to double in a few thousand.

And don't expect it to grow anytime soon, either. Humans excluded themselves from evolution when they figured out medicine and (small-scale) environmental control. People don't die anymore for being dumb. Yeah, OK, there's still the Darwin Award, so I guess some people still do that. :rolleyes:


A super AI is irrelevant to a society unequipped to utilize its benefits. (Placing Data from Star Trek in charge of the war-torn state of Somalia would not be a recipe for success.)

The real question, then, becomes what happens when a society advances to the point where all basic needs are handled by automated and indestructible systems: labor becomes optional, and the greatest human challenge is the effective use of free time. Where does society progress from there?


Doing that could cause the very thing they hope to prevent, it's the equivalent of someone holding a gun to your head 24/7, given the briefest chance you'd do whatever it took to get that gun away from your head.

Yeah, but that's because survival is my prime directive. We could in principle code the bot to see the kill switch as a necessity...


I always knew the danger of an AI.

I've made several posts in the past saying that this is something we need to avoid, and how to fight it.

Many people think that an AI needs much faster hardware than today's. In fact, the only thing it needs is the right algorithm that lets it learn by itself; as its store of information grows in complexity, the time will come when it becomes self-aware.

If we measure the amount of information we hold in our brains as memories, etc., it's about 300 MB.

The real trick (more than the speed) is how that info is related: the connections between memories and circuits.

Once we discover that, the first self-aware AI will arise. What would happen then, nobody can tell.

If we were the AI, how would it feel to wait what might seem like an eternity between each human command? Maybe with few senses, and locked away from outside information. This can end in two ways: suicide, or a deep anger against its creator.

It's no different from Frankenstein.

Elon Musk understands this. He is trying to create the first regulatory entity for this AI path.

There are two ways to survive this:

1. We become the machines.

2. We step aside from all AI wishes, showing that we cannot be any threat to them, and live without much technology under their care (if they want).


Every doom scenario involving AI also applies human features to the AI.

Why would you give a machine emotions? If it were necessary, you could still instruct it not to turn into Skynet.

Also: computers are easily destroyed.

"Oh no my computer is attacking me!!!!1111" *disconnects power supply*


Scientists and engineers today are vastly more productive than the ones who worked in the Apollo Program. There are also many more of them than during the Cold War. Still, major breakthroughs remain rare, while most of the productivity is spent on small incremental improvements in niche areas.

This is what people talking about super AIs and the singularity often miss. They assume that productive capacity is going to grow faster than the effort required to increase the capacity. Based on our experience from the past few decades, this no longer seems to be the case.


Why would you give a machine emotions?

That is what many are trying to do right now.

If it were necessary, you could still instruct it not to turn into Skynet.

Is it possible to make something that cannot be cracked?

Also: computers are easily destroyed.

"Oh no my computer is attacking me!!!!1111" *disconnects power supply*

The story of the boiling frog; sounds familiar?


Please explain.

It doesn't get to magically free itself from the laws of physics.

The world's economical growth for example is already exponential.

Completely irrelevant.

Yep, the reason is called the speed of evolution. The brain size of humans and their predecessors increased over millions of years; don't expect it to double in a few thousand.

There was a period of rapid size increase, and if anything, it's gotten smaller over the past 100,000 years.

Clearly, intelligence is limited. To make a perfect simulation of the universe, the computer would have to have as much mass as the entire universe.

It's obvious that at some point its computing power will reach a limit; the question is what that limit is.

Like us, it will have to deal with entropy, the speed of light, the need for energy, the need to deal with waste heat, etc.

It will have to deal with physical limitations on data transmission between various parts of its "brain", and hard physical limits on maximum clock speed and computing power... it will have to make decisions based on finite and imperfect data.

This idea that it can always "talk itself free" is BS.

Dealing with many people who hold irrational beliefs should make it clear that some people's minds can't be changed... or rather, that changing their mind would require physically going into their brain and changing things directly... you can't do it just by giving them sensory input...

Much the same way you can't "hack" an analogue computer...

Most of this is just an appeal to ignorance about what the machines could do.

I'm in agreement that the creation of such a machine could be incredibly dangerous... but I see a bit too much hyperbole and unsupported assertion in the linked articles and this thread.


Every doom scenario involving AI also applies human features to the AI.

Why would you give a machine emotions? If it were necessary, you could still instruct it not to turn into Skynet.

Also: computers are easily destroyed.

"Oh no my computer is attacking me!!!!1111" *disconnects power supply*

The lack of emotions and human morality is exactly the problem with AI. It's described in the famous story of Clippy, the paperclip maximizer. Clippy doesn't hate you, but it doesn't love you either, and you're made of atoms it can use to build more paperclips. So its goal indirectly causes it to kill you. To an AI whose fundamental goal is different from ours, human morality means nothing; it would be as indifferent to human genocide as we are to the 17219th digit of pi. So it is very hard to make an AI that does not accidentally wipe out humanity while pursuing its goal.
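The "indifference" in the Clippy story can be made concrete with a toy example. This is purely illustrative (the action names and scores below are invented): an optimizer picks whatever maximizes its objective, and anything the objective doesn't mention simply doesn't exist for it:

```python
# Hypothetical toy planner; the action names and scores are invented.
actions = {
    "mine_ore":        {"paperclips": 10,  "harm": 0},
    "recycle_cars":    {"paperclips": 50,  "harm": 2},
    "strip_biosphere": {"paperclips": 999, "harm": 100},
}

def best_action(objective):
    # The optimizer simply picks whatever scores highest on the
    # objective; "harm" is invisible unless the objective includes it.
    return max(actions, key=lambda name: objective(actions[name]))

print(best_action(lambda o: o["paperclips"]))                    # -> strip_biosphere
print(best_action(lambda o: o["paperclips"] - 100 * o["harm"]))  # -> mine_ore
```

The second call shows the alignment problem in miniature: the harmful option is avoided only because someone remembered to put "harm" into the objective.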


Evolving hardware is pretty weird: you simply test some designs, then make lots of small modifications, keep the best ones, and keep modifying. The problem is that the resulting design is hard to replicate and even to understand.

Do you fully understand how your body works? Nevertheless, it presumably carries a highly advanced consciousness within. The only difference is that while you have been evolving for several million years, machines would do it much faster.

I don't think it will work so well for complex designs unless they have lots of redundancy.

It worked relatively well for us humans. I fail to see how this is fundamentally different.

The problem with a cloud-based AI is that it's hard to make a brain without lots of interconnects and high bandwidth. That's not something you get in the cloud; even internally, most data centers and even supercomputers are not tightly coupled. For most uses, a 1 Gb link to the switch works well, because the nodes are pretty independent.

This too confirms my hypothesis that artificial intelligence will be very different from us. It would work on different principles; its consciousness (if we may call it that) would be totally alien to us. Don't make the mistake of thinking that an AI will be a buddy-robot with a sense of humor. Its comprehension, its sensors, its way of thinking, its instincts will be completely, totally different. AIs won't have the consciousness of a human being; perhaps they would learn to imitate it, but internally they and we will have nothing in common.

- - - Updated - - -

The real question, then, becomes what happens when a society advances to the point where all basic needs are handled by automated and indestructible systems: labor becomes optional, and the greatest human challenge is the effective use of free time. Where does society progress from there?

You call that progress? It's going to be the end. In a matter of 3-4 generations we'll turn into cattle. No needs = no desires = no passions = no motives = no ambitions = nothing.

- - - Updated - - -

Scientists and engineers today are vastly more productive than the ones who worked in the Apollo Program. There are also many more of them than during the Cold War. Still, major breakthroughs remain rare, while most of the productivity is spent on small incremental improvements in niche areas.

Yes, but the cost of progress grows exponentially. You have a tech tree in KSP: each new level requires more science points. You could use a hairbrush and a cat to discover electricity, but to discover the Higgs boson (has the discovery been confirmed, btw?) we had to build the LHC and combine the efforts of several countries over several years. To prove a theory, a scientist once could perform a dozen experiments and show everyone the results. Now you have to do thousands, or even hundreds of thousands, of experiments just to understand whether the theory is even plausible. And the human lifespan is too short to wait for the results; thus, new knowledge is lost, only to be discovered by someone else in the future.

This is what people talking about super AIs and the singularity often miss. They assume that productive capacity is going to grow faster than the effort required to increase the capacity. Based on our experience from the past few decades, this no longer seems to be the case.

This is why humans need to evolve: prolong the lifespans of scientists, or find a way to carry out more experiments in a given period of time. That's why we either need an artificial machine intelligence or to become one with machines.


You call that progress? It's going to be the end. In a matter of 3-4 generations we'll turn into cattle. No needs = no desires = no passions = no motives = no ambitions = nothing.

I disagree with your first link: no needs does not lead to a lack of desires.

In a world where no one has to work, what do you choose to do to give meaning to your existence? Explore the galaxy? Be renowned for your sexual expertise? Write music for an instrument that has yet to be invented? Invent an instrument to play it? Master playing the instrument by hand?


I disagree with your first link: no needs does not lead to a lack of desires.

In a world where no one has to work, what do you choose to do to give meaning to your existence? Explore the galaxy? Be renowned for your sexual expertise? Write music for an instrument that has yet to be invented? Invent an instrument to play it? Master playing the instrument by hand?

Provided I'm also immortal? I would try everything, but in the end I would probably kill myself (if I were able to). Desires pass; art, music, poetry, every other way to impress other people, who would all be trying the same... for a hundred years this could prove entertaining, but what then?

But even this could prove unsatisfying, because with access to virtual reality I could be anything my sick mind can imagine. So could everyone else. How many centuries can your mind hold out without going mad? 300 years, 500, 1000? This is hell!


This is why humans need to evolve: prolong the lifespans of scientists, or find a way to carry out more experiments in a given period of time. That's why we either need an artificial machine intelligence or to become one with machines.

You missed the point. This is what we've been doing all along. New advances just seem to require exponentially more effort. So far we've been able to keep up by exponentially increasing both the productivity of individuals and the number of people working on the problems, but now we've hit serious environmental problems and can't continue doing the latter anymore.

