
Everything wrong with our predictions (The Singularity is coming)


AngelLestat


This post will be very long, but I think it is important that we get involved. That does not mean we can do something about it, but we can be better prepared to face that final step as human beings.
(Send me a PM if you find any English mistakes that make a sentence unclear.)

Introduction:

If I need to make a prediction, I like to include as many variables and as much data as my brain allows.
But there is always one particular variable which I choose to ignore, because if it is unleashed, it destroys any possibility of accuracy in the prediction.
That variable is the moment our technology escapes the limits imposed by our brain: when research and its conclusions are carried out by the technology itself, which can improve itself in a positive feedback loop, generating an exponential explosion.
To understand how big this change will be, we first need to know how our technology and human capacity evolved over the last 50,000 years.
Our brains have hardly changed in this time frame. We already had language to help us share discoveries, but it was not until writing that we became more efficient at accumulating knowledge.
Machines, population, cheap energy: all played an important role in transforming our slow linear growth into something more exponential.
But our brain is still the same; our intelligence did not evolve to visualize and understand complex concepts beyond our everyday reality, so we depend heavily on experimentation to move forward.

The new age:

We have already entered the age of machine self-learning, using neural networks and principles of evolution.
In case someone doesn't know, all of the biggest recent software advances (speech recognition, image recognition, concept understanding, new search algorithms, among others) were achieved by these new neural network structures.
The trick was to mimic what we know about real neurons and the way we learn, which is all based on how pieces of information are related.

How neural networks work:


They work similarly to real neurons. Sensory impulses reach a set of neurons, and the brain starts to create patterns (neural connections form when information is related). For example, one set of colors and shapes (itself a combination of many neurons) connects to a single neuron that represents that pattern; at the same moment, a pattern of sounds and a smell are each connected to their own neurons; these three neurons are in turn connected to another that may represent "bird", which is itself connected to everything else related to birds.
All these connections become stronger each time they are repeated; that is how the brain learns to recognize patterns and relate information.
In artificial neural networks, each virtual neuron (perceptron) uses math to adjust the weight of each of its connections during a similar kind of training, which in some cases is based on genetic algorithms.
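To make the "connections strengthen with repetition" idea concrete, here is a minimal sketch in Python (my own illustration, not taken from any of the linked examples) of a single perceptron learning the logical AND function by nudging its connection weights every time it repeats the training data:

```python
# A single perceptron learning logical AND.
# Repetition strengthens the useful connections, as described above.

def step(x):
    # Fire (1) if the weighted input is strong enough, stay quiet (0) otherwise.
    return 1 if x >= 0 else 0

weights = [0.0, 0.0]   # one weight per input "connection"
bias = 0.0
lr = 0.1               # learning rate: how much each repetition adjusts a connection

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):    # repeat the same experiences over and over
    for (x1, x2), target in samples:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        err = target - out
        weights[0] += lr * err * x1   # strengthen or weaken connection 1
        weights[1] += lr * err * x2   # strengthen or weaken connection 2
        bias += lr * err

print([step(weights[0] * a + weights[1] * b + bias) for (a, b), _ in samples])
# -> [0, 0, 0, 1], matching AND
```

Real NeuNets stack many such units in layers and use more sophisticated training rules (backpropagation, or the genetic algorithms mentioned above), but the principle of repeated adjustment is the same.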

 

[Image: deep neural network diagram]

Example Links:

Math, code and principles of NeuNets.
Similarities in signals between cat neurons and NeuNets (3min)
Learning to play Super Mario, perfect example of how NeuNets work and evolve
Car game, genetic neural.


Brain vs computers:


The brain has about 85 billion neurons, each with perhaps 1,000 connections. Electronics are at a big disadvantage in the number of neurons they can emulate, but they have roughly a 2-million-times advantage in signal transmission speed and a 1-million-times advantage in signal repetition rate, and both are still rising.
Furthermore, a computer cannot get tired of learning, it can absorb huge amounts of data, and once it learns something it can copy the structure of its NeuNet so that other computers do not need to learn it again.
This is exactly what your cell phone does each time it uses text recognition with the camera: it just runs an already trained and simplified NeuNet.
Complex NeuNets require a lot of computational power. This is because they cannot know the true valley (the global minimum) of a function, so they settle into the first minimum they find and then apply different tricks to work around it, which takes time.
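The "first minimum it finds" behavior is easy to demonstrate with plain gradient descent. The function and learning rate below are arbitrary toy choices of mine, not anything from the post: the curve has a shallow valley near x ≈ 1.13 and the true valley near x ≈ -1.30, and where you start decides which one you fall into.

```python
# Gradient descent settles into whichever valley it starts above.

def f(x):
    return x**4 - 3 * x**2 + x       # two valleys: shallow (right), true (left)

def grad(x):
    return 4 * x**3 - 6 * x + 1      # derivative of f

def descend(x, lr=0.01, steps=2000):
    # Repeatedly step downhill; there is no mechanism to climb back out.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

shallow = descend(2.0)    # starts on the right -> stuck near x = 1.13
deep = descend(-2.0)      # starts on the left  -> true valley near x = -1.30
print(round(shallow, 2), round(deep, 2))
# -> 1.13 -1.3  (and f(shallow) > f(deep): the first run found a worse answer)
```

The tricks real training uses (momentum, restarts, stochastic noise) are exactly attempts to escape the shallow valley, and they cost compute.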

Just six years ago, some specialists in this field said that NeuNets were not promising because of the amount of computing power they need; their predictions could not have been more wrong.
People say that Moore's Law is slowing down, but the architecture is changing: supercomputers are now based on GPUs, which is great for NeuNets, allowing a 20x to 50x speedup over CPUs.
It's true that silicon is reaching its limit and ordinary CPU chips have not exactly doubled in power in recent years, but that is because a lot of research is going in different directions, such as photonic computing, graphene and other 2D structures. We also see huge leaps in other devices, such as the new NAND memories (which were the real bottleneck of today's computers).
The USA wants to build new supercomputers because it realizes that whole classes of problems can now be solved with computation alone; for example, wind tunnels are no longer needed.

But machine self-learning will take on a new meaning when quantum computers kick in, because the process of finding the true valleys is thousands of times faster.
D-Wave is the only company that has managed to build a commercial quantum computer; Google, NASA and Lockheed Martin were the first buyers. There was some controversy about whether it was a true quantum computer; it has now been shown that it is, although it lacks some quantum properties that a full QC machine would have.
These D-Wave machines are very good at quantum annealing: finding the true maximum or minimum of a function using quantum entanglement. The machine has three main parts: a binary interface; a cooling system that reaches 0.001 kelvin, plus an electromagnetic shield to block interference; and the quantum chip itself, built entirely from superconductors.
The first chip had 128 qubits, then it was upgraded to 512; in 2015 they are moving to 1,157 active qubits out of a total of 2,048, the rest remaining off for the moment.
The problem is that the number of qubits is still low and the decoherence high; the machine can deal with errors, but doing so increases the calculation time.
The new chip promises to reduce decoherence and other drawbacks, and even with all those problems, the 512-qubit machine was used to train some NeuNets.
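Quantum annealing itself cannot be reproduced on a classical machine, but its classical cousin, simulated annealing, illustrates the same goal: escaping a shallow valley to reach the true one. The sketch below is my own toy illustration (same two-valley function as before), not how a D-Wave works internally; the quantum version tunnels through barriers rather than hopping over them thermally.

```python
import math
import random

# Simulated annealing: sometimes accept a *worse* position, with a
# probability that shrinks as the "temperature" cools. This lets the
# search climb out of the shallow valley that gradient descent gets
# stuck in, and find the true valley near x = -1.30.

def f(x):
    return x**4 - 3 * x**2 + x   # shallow valley near 1.13, true valley near -1.30

def anneal(x=2.0, temp=5.0, cooling=0.999, steps=20000, seed=1):
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)        # propose a random nearby move
        delta = f(cand) - f(x)
        # Always accept improvements; accept worsening moves with
        # probability exp(-delta/temp), which vanishes as temp -> 0.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

print(round(anneal(), 2))  # typically lands near -1.3, the global minimum
```

Note that this took 20,000 noisy evaluations for a one-dimensional toy; the appeal of quantum annealing is doing the same job on enormous functions, far faster.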

Links:  Dwave computer, Dwave chip, Extra info about QC

Specialists predict that by 2022 we will be able to build a true and powerful quantum computer, but it is a waste of power to run binary algorithms on one; normal computers are faster and more cost-efficient for that. So we might expect hybrid or interconnected computers, using binary chips for binary algorithms and quantum chips for quantum algorithms.
The amount of power we can extract from them will depend on how many new quantum algorithms we develop.
But one step further, which would not require figuring out new quantum algorithms and logic, would be to improve the architecture of the NeuNet itself, using quantum properties to represent NeuNet mechanics, like this:

Wave function --> neuron/perceptron
Superposition (coherence) --> interconnections
Measurement (decoherence) --> evolution toward an attractor
Entanglement --> learning rule
Unitary transformations --> gain function

So instead of making computers simulate learning machines, we would build a learning machine that removes all the bottlenecks from the start.
This photonic reconfigurable chip may be a step in that direction.
If this is achieved, we would be talking about the ultimate learning machine: so fast it would be beyond comparison, and with very low power consumption.
So the conclusion is that even binary computers can overcome the brain, and they will be able to do so at any task within the next two decades.

In the past it took a lot of code engineering and hundreds of experts working for many years just to build an algorithm that could identify objects in a picture or a song on the radio.
They started at 2% accuracy, then 5%, 8%... many years later, 25%. In the first year Deep Learning came out (a new NeuNet approach that needs less human intervention, among other characteristics), it already achieved 40% in a very short time, without those hundreds of engineers.
Since then, accuracy on all of these tasks has increased considerably.
There are small hardware chips that recreate the structure of an already trained NeuNet and can identify people, cars and other objects in a video surveillance feed while consuming only a few milliwatts, whereas conventional programs doing the same task would consume far more power.
In 2011 IBM won the Jeopardy! game using Watson, a supercomputer based on NeuNets that was able to read Wikipedia and relate all of its content. It has kept learning in other areas (medicine, analytics, cooking, sports, advising), helping researchers in a way that nobody could until now.

Watson Links:  Jeopardy, How it works?, as Advisor, Learning to see, General knowledge

We can feed these algorithms raw data, such as the pixels on a screen, without teaching them any rules at all; the computer learns what to do by itself just by looking at the screen. In this case: learning to play video games.
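A heavily simplified sketch of this kind of trial-and-error learning: tabular Q-learning on a five-cell "screen" where reward waits at the right edge. Everything here (the states, actions and rewards) is a toy stand-in of my own; the real DeepMind Atari system replaced the table with a deep NeuNet reading raw pixels, but the loop of act, observe reward, update value is the same.

```python
import random

# Tabular Q-learning: no rules are programmed in. The agent only sees
# which cell it is in, tries moves, and gets a reward of 1 for reaching
# the rightmost cell. From that alone it learns the policy "go right".

N_CELLS = 5
ACTIONS = (-1, 1)                     # move left or move right
q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration
rng = random.Random(0)

def greedy(s):
    # Pick the best-valued action, breaking ties randomly so the
    # untrained agent actually explores both directions.
    return max(ACTIONS, key=lambda a: (q[(s, a)], rng.random()))

for episode in range(200):
    s = 0
    while s != N_CELLS - 1:
        a = rng.choice(ACTIONS) if rng.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_CELLS - 1)
        r = 1.0 if s2 == N_CELLS - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

policy = [greedy(s) for s in range(N_CELLS - 1)]
print(policy)  # after training, the greedy policy is "always right": [1, 1, 1, 1]
```

Nobody told the agent that right is good; the knowledge emerged from play, which is also why (as the drawbacks below note) we do not fully control what such systems learn.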

There are two drawbacks with this technology.

1- It learns on its own, so the acquired knowledge is not fully controlled by us.
2- We don't really understand why it produces a given outcome, because "it's a complicated machine", so we cannot predict what it will do.

Google recently bought the company DeepMind, whose goal is to create a true AI. Our friend Elon Musk also invested some money in it, to ensure that the necessary security measures are taken; this also gave him the opportunity to keep an eye on the development of this technology.

Examples of today's breakthroughs with Deep Learning:


The things deep learning can do now are almost unbelievable. For example, it can look at a picture and create a text caption describing it, like "two young girls playing with Lego toys", which is in fact what the picture shows.

Links: How it works, Paper and examples.

You may also ask a question about an object in a picture, and it will answer taking the whole context of the picture into account, like:
"Where does the young boy secure himself?    Answer: suitcase"

Link: Paper

Another way to measure Deep Learning's power is to have it watch movies and match their content against the corresponding books: for each scene of the movie, it finds the section of the book that describes it.
To achieve this it needs to understand semantics, the structure of language and the visual content.

Link: Paper

Then we have DeepStereo, which from a few pictures of a room or a street (as in Google Street View) can create a full 3D animated environment and predict the view you would see from different angles and distances.
We can find similarities with our brain here: if we remember a picture, we can create a full simulation of that environment in a dream, because our brain already knows how those objects look from different angles.

Link: DeepStereo video example.

Self-learning movement:

 

I remember how researchers went mad for years trying to encode human walking and balance into hand-written algorithms; now these NeuNets can learn it in hours or minutes, and they can adapt to different circumstances.
Muscle dynamics, Learning to walk, Spider balance, Injured leg, Ping pong
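The evolutionary loop behind these walking demos can be sketched in a few lines. Here the "controller" is just a vector of weights, and the fitness function is a made-up stand-in of mine for "distance walked before falling"; a real setup would score each controller inside a physics simulation, but the select-and-mutate loop is the same.

```python
import random

# Evolving controller parameters: score a population, keep the best,
# mutate them, repeat. No gradient, no hand-written walking rules.

rng = random.Random(0)
TARGET = [0.5, -1.2, 0.8, 0.3]  # hypothetical "ideal gait" weights (toy stand-in)

def fitness(w):
    # Higher is better. In a real demo this would be the simulated
    # distance walked; here we just penalize distance from TARGET.
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def mutate(w, scale=0.1):
    # Small random tweaks to each weight, like genetic mutation.
    return [x + rng.gauss(0, scale) for x in w]

# Start with 30 random controllers.
pop = [[rng.uniform(-2, 2) for _ in range(4)] for _ in range(30)]

for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                    # selection
    pop = survivors + [mutate(rng.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print(round(-fitness(best), 3))  # squared error of the best "gait", near 0
```

Swap the toy fitness function for "how far did the simulated body travel in 10 seconds" and this same loop is what discovers walking, balancing on an injured leg, and so on.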

Extra explanation and uses:

Video and predictions

Skype already has a spoken-language translator that works across many languages; the next step is to improve the translation and mimic the user's voice in the translated speech... yeah, Uhura no longer has a job.

New mutable self-learning viruses, and the antiviruses to match, are already in development.
Lastly, easily configurable NeuNets will let us design any utility we want, even if we know nothing about coding or NeuNets.
And the list goes on, covering almost all software and tasks.


The human conclusion

To see whether these neural networks really show traits of intelligence in their results, and what is needed to achieve consciousness, we first need a better definition of what we call intelligence and consciousness.
Michio Kaku does a good job answering this question in the first 10 minutes of this video, from a physicist's point of view; I recommend it.
Looking at DeepMind's papers and the latest breakthroughs in neuroscience on the brain's different mechanisms, we are close to creating an algorithm that learns in a similar way to the brain; this does not mean it needs to be identical, it just needs to work.
We can make airplanes that fly well without imitating all the complex movements of birds.
I realized that when searching for news in this field, selecting the option "this year" is not enough; you need to select months or even weeks, given how fast it progresses.

So this takes us to our final question: and then what?

Well, we will reach the time when our brain is no longer the limit of our technology; our slow way of learning, testing and developing will be over.
At that point our technology will allow us to create a learning machine smart enough to improve its own design. This point in time is called "THE SINGULARITY".


Even today it is becoming more difficult to make predictions, but once we are in the singularity all our predictions collapse; we can no longer see the future, and (to a certain degree) neither can the machine that is driving it.
At this point most resources will be focused on improving the power of this hard AI; any other application of the newly acquired technology becomes a waste of time and resources.
Why? Because knowledge will increase so fast that any application we might think of as useful will be outdated within a few months by new discoveries. A hard AI does not need experimentation to prove new theories (something that consumes a lot of our time); it can do it by deduction alone.
We will reach a time when we (or it, the hard AI) double all human knowledge in just one year, then double it again in a month, then again in just a week. It is not hard to imagine that, no matter how complex the universe, all answerable questions will be answered in a very short time after the singularity: a jump from limited knowledge to godlike knowledge without intermediate steps.

So, when will this happen?

This same question was put to many scientists and people working in the field in 2012; the average answer was around 2040, but many of those specialists were not even able to predict the degree of success that deep learning has achieved in just three years.
Elon Musk said it may happen in 5 to 10 years. If I have to make a prediction, I would say 10 to 15 years; even 15 years looks like an eternity at this accelerating rate.

[Images: Homer Simpson "The End is Near"; Time magazine cover, 2011-02-21, on the Singularity]

We have seen many signs like this in the past, but this one certainly reflects the end of predictions; as for the Time magazine issue, that cover story was in fact about the singularity, released in 2011.
So this makes us think: what about all our silly predictions about how long until mankind begins to colonize other worlds, or the technology needed for a Von Neumann probe, or how long until we reach another star? What about our life plan of having grandchildren and dying of old age? Does global warming really matter?
We were always wrong to ignore this variable in all our predictions, but well, maybe now we are more prepared to explore and enjoy these last years of life as we know it.

Edited by AngelLestat
Forum migration 2015
Link to comment
Share on other sites

Another way it might turn out is how nuclear power turned out, from the 1950s predictions to today. Yes, this is partly political, but the political part is purely a Western issue, not a USSR/Russia or China one.

Also Intel's CPU roadmap from around 2005, which assumed 10+ GHz CPUs.

The Fallout game series is based on the 1950s predictions of the future; steampunk on the 1880s predictions.

In short, we do not know where the graph starts to flatten out. Modern supercomputers are factory-sized like the old "intelligent" computers, except that they fill a building with standard high-grade GPUs and CPUs.


I just don't get what the driving force will be to make EVERYONE willingly plug themselves into a giant computer network. Plus, the biggest factor these predictions forget about is money. Technology might be progressing fast, but the rate at which things become widely used and cheap is slower and steadier.


The problem I always have with the singularity is the assumption that technology can be improved nearly indefinitely.

I've never really seen any justification for that assumption. For all we know we're very close to the practical limits of technology: a few more major breakthroughs and then stagnation until our extinction. Not because we stop trying, but because there is simply no way to improve technology within the current limitations of physics.


The problem I always have with the singularity is the assumption that technology can be improved nearly indefinitely.

I've never really seen any justification for that assumption. For all we know we're very close to the practical limits of technology: a few more major breakthroughs and then stagnation until our extinction. Not because we stop trying, but because there is simply no way to improve technology within the current limitations of physics.

This. I remember reading an article about processor chip design that stated we're starting to run into physics as the barrier to our improvement of a CPU; you can't make the circuits any closer together because there's no way to stop the current from jumping, there's no way to mitigate the heat generated, etc.

I also saw an interesting perspective that we're ALREADY in the singularity, which started with the telegraph. Massive technological advancement and near-instant worldwide communication, everything since then has been refinement of the basic concept.


Nope nope nope.

It's simple. Even assuming the NeoWhig theory of history that allows for this technology to be developed (which, oh boy, what a huge assumption that is!) all this will do is act like any other selection pressure ever faced by the human species.

To wit:

Assume the singularity is a thing that happens. There will be, broadly, two classes of people: those that care about the singularity, and those that don't.

Those that do won't breed. Those that don't will breed. Since human behaviour is susceptible to genetic influence, the number of people in new generations who will plug themselves in will approach zero. (If you don't believe me, check out the statistics on the rates of Amish defection.) Ergo, eventually you'll have a human population with no interest in the fact of the singularity going about their business.

The only way that this could be not the case is if the technology does not in fact cause a drop in birth rates. But of course, that's absurd, because said drop in birth-rates has already happened.


For the Singularity to happen, technology would have to surpass human intelligence not only in terms of raw number-crunches per second, but also in terms of unmeasurables such as creativity. Keep in mind, a lot of human technological advances are the result of random brainstorms, hunches, educated guesses, accidents, and the occasional scientist doing math on his chalkboard and forgetting to carry a 1 somewhere.

Technology can certainly imitate such things--with a great deal of code and months or years of programming. But there are things your brain is currently doing, instinctively and automatically, which we simply have no idea how to program into a computer. Random reference to that Doctor Who episode where the Doctor shows two robots how to play rock-paper-scissors and the robots get themselves stuck in a perpetual loop...... :D


But our brain is still the same; our intelligence did not evolve to visualize and understand complex concepts beyond our everyday reality, so we depend heavily on experimentation to move forward.

Yes and no.

A few facts:

- First, most of our brain is made of "unspecialized" neurons: this is what we use for advanced reflection, and the use and arrangement of the processing done by these neuronal webs is dynamic.

- We humans have developed an entity symbiotic with humanity: Culture. It is the sum of the experiences and knowledge of humanity, and today "Culture" has largely outgrown the capacity of any single brain.

- We already use and abuse "brain extensions" for experimentation and for the storage of knowledge: all our data and computers are nothing more than that.

What a "singularity" could be is what I will call a "super-culture". Our symbiotic "culture" is already beyond any control by humans. Nobody, not even the most powerful politicians, can fully drive a culture: every human takes his knowledge from this symbiotic sapient entity and adds a little something to it. But really, today, "cultures" are already driving most of our lives.

A "super-culture" would be a culture that drives itself, and most of humanity with it.

For almost all humans, it will change nothing. They lost control of their destiny long ago.

Edited by baggers

There isn't a human or group of humans intelligent enough to create the singularity.


[Citation Needed]

Also, arguments about the limitations of computing power on advances in deep learning algorithms typically focus on supercomputers, but IBM's new SyNAPSE neurosynaptic chip is an example of an entirely new computer architecture that is specifically designed around neural networks. To be fair, IBM's new chip pales in comparison to even a mouse's brain, but it runs on a mere 70 mW. That's 70 milliwatts... Improvements are bound to come quickly. We could be, as AngelLestat points out, at the beginning of a new paradigm. We should be both excited and cautious.


The problem I always have with the singularity is the assumption that technology can be improved nearly indefinitely.

I've never really seen any justification for that assumption. For all we know we're very close to the practical limits of technology: a few more major breakthroughs and then stagnation until our extinction. Not because we stop trying, but because there is simply no way to improve technology within the current limitations of physics.

This. Also:

Why? Because knowledge will increase so fast that any application we might think of as useful will be outdated within a few months by new discoveries. A hard AI does not need experimentation to prove new theories (something that consumes a lot of our time); it can do it by deduction alone.

We're already quite capable of doing this. Until deductions are confirmed by observation or experimentation though, they remain firmly in the realms of 'interesting hypothesis' or 'beautiful mathematics'. Having the deductions made by an AI rather than a human doesn't change that. Besides, in Huxley's words: "The greatest tragedy of science is the slaying of a beautiful hypothesis by ugly facts."


If I need to make a prediction, I like to include as many variables and as much data as my brain allows.

But there is always one particular variable which I choose to ignore, because if it is unleashed, it destroys any possibility of accuracy in the prediction.

First of all, let me say that I find the idea of predicting the future utterly ludicrous (and somewhat arrogant, however powerful your brain is). I believe that we might be able to determine crude trends over a foreseeable future (basically 10 years or so), but anything beyond that is too unpredictable. In the decades that preceded, nobody predicted the transformations that would happen with the Internet, WWI, WWII, the Great Depression or the end of the Cold War, yet those events had huge impacts on culture, society and technology.

We are totally incapable of comprehending the changes that might occur beyond 2020 or 2030.

The problem I always have with the singularity is the assumption that technology can be improved nearly indefinitely.

I see no reason to believe that an artificial self-consciousness is impossible with today's (or near-future) technology. We know how neurons work at the biological level. We can observe how they are arranged and how they interact. We have the understanding to model and simulate neural networks.

There are physical limits to the size of microelectronics, but nothing stops you from just adding parallel processing as needed. We can already do impressive stuff within those limits. Scientists have already created working simulations of the brain of a fly or a rat. It's just a matter of time and throwing more computing power at it that we can model more complex neural systems.

Early models will probably not be "brain sized" and probably won't work in real-time, but there's no reason to believe that we can't have building-sized supercomputer that can simulate 10 minutes of human brain function over 10 days of calculation. That's a start.


I see no reason to believe that an artificial self-consciousness is impossible with today's (or near-future) technology. We know how neurons work at the biological level. We can observe how they are arranged and how they interact. We have the understanding to model and simulate neural networks.

There are physical limits to the size of microelectronics, but nothing stops you from just adding parallel processing as needed. We can already do impressive stuff within those limits. Scientists have already created working simulations of the brain of a fly or a rat. It's just a matter of time and throwing more computing power at it that we can model more complex neural systems.

Early models will probably not be "brain sized" and probably won't work in real-time, but there's no reason to believe that we can't have building-sized supercomputer that can simulate 10 minutes of human brain function over 10 days of calculation. That's a start.

I fully agree here. Human level AI is definitely possible (We're the living proof) and it's probably possible to engineer something smarter and faster than ourselves.

But the way I always understood the singularity is that this AI would proceed to self-improve at an exponential rate until it is many orders of magnitude more intelligent than ourselves. This superintelligent AI would then proceed to create godlike technology that grants it (and hopefully us) immense power. This power is then used to convert all nearby matter into intelligence and infrastructure (depending on the person talking, this "nearby" is either the solar system or the entire universe).

It's this line of thinking that I disagree with. A superintelligent AI is still bound by the laws of physics. If those types of technology are impossible within physics then so is the singularity as envisioned by people like Ray Kurzweil etc.


I do find it slightly amusing that on the one hand, the OP states that:

Even today it is becoming more difficult to make predictions, but once we are in the singularity all our predictions collapse; we can no longer see the future, and (to a certain degree) neither can the machine that is driving it.

And then proceeds to make a number of predictions about the singularity and how it will work out.

I'm also somewhat skeptical of the notion that we can create an AI that is capable of self-improvement to the extent required for a singularity. I can envisage a Nibb31 type future in which we've managed to make a building sized working simulation of a human brain and I can imagine that eventually we'll be able to fit that building sized computer into something the size of a human skull. What I'm much less convinced about, is whether that computer will ever be anything more than a copy of a human brain and so ever have more than human intelligence.

It might be that intelligence (however we define that) does turn out to be a simple function of processing power - i.e creating a superintelligence becomes a question of simply throwing more electronic neurons at the problem. But if not, we're going to need some fairly fundamental understanding as to how the huge collection of neural circuitry in the human brain actually combines to produce a human consciousness. And that, I suspect, is going to be a lot harder than 'merely' making an electronic copy of that brain.


I fully agree here. Human level AI is definitely possible (We're the living proof) and it's probably possible to engineer something smarter and faster than ourselves.

But the way I always understood the singularity is that this AI would proceed to self-improve at an exponential rate until it is many orders of magnitude more intelligent than ourselves. This superintelligent AI would then proceed to create godlike technology that grants it (and hopefully us) immense power. This power is then used to convert all nearby matter into intelligence and infrastructure (depending on the person talking, this "nearby" is either the solar system or the entire universe).

It's this line of thinking that I disagree with. A superintelligent AI is still bound by the laws of physics. If those types of technology are impossible within physics then so is the singularity as envisioned by people like Ray Kurzweil etc.

This. More realistic are the milestone singularities we get from time to time: fire, farming, writing, printing, steam, computers.

AI will be another one of these.

- - - Updated - - -

I do find it slightly amusing that one the one hand, the OP states that:

And then proceeds to make a number of predictions about the singularity and how it will work out.

I'm also somewhat skeptical of the notion that we can create an AI that is capable of self-improvement to the extent required for a singularity. I can envisage a Nibb31 type future in which we've managed to make a building sized working simulation of a human brain and I can imagine that eventually we'll be able to fit that building sized computer into something the size of a human skull. What I'm much less convinced about, is whether that computer will ever be anything more than a copy of a human brain and so ever have more than human intelligence.

It might be that intelligence (however we define that) does turn out to be a simple function of processing power - i.e creating a superintelligence becomes a question of simply throwing more electronic neurons at the problem. But if not, we're going to need some fairly fundamental understanding as to how the huge collection of neural circuitry in the human brain actually combines to produce a human consciousness. And that, I suspect, is going to be a lot harder than 'merely' making an electronic copy of that brain.

Even a building-sized brain can be made smarter by using a larger building :)

You lose some performance because of signal delays, but speed is not the same as intelligence. The main benefit of computers is speed, not intelligence, and they are still useful.

A building-sized human brain simulator would in itself be of limited use.


A building-sized human brain simulator would in itself be of limited use.

Why? For a virtual entity, physical footprint or location is irrelevant. "On the Internet, nobody knows you're a building-sized AI."


And why are we still going on about building sized computers? I will quote my own post:

Also, arguments about the limitations of computing power on advances in deep learning algorithms typically focus on supercomputers, but IBM's new SyNAPSE neurosynaptic chip is an example of an entirely new computer architecture that is specifically designed around neural networks. To be fair, IBM's new chip pales in comparison to even a mouse's brain, but it runs on a mere 70 mW. That's 70 milliwatts... Improvements are bound to come quickly. We could be, as AngelLestat points out, at the beginning of a new paradigm. We should be both excited and cautious.

If you read a bit more about IBM's neurosynaptic chip, it currently has only 1 million neurons and 256 synapses per neuron (compared to tens or even hundreds of thousands of synapses per neuron in mammalian brains), but IBM's goal is to scale their existing design up to incorporate more neurons and synapses. And given that the current chip is the size of a postage stamp and runs on just 70 mW, I fail to understand where predictions of "building sized computers" are coming from.


One thing I'm sure of: the way to make a super AI is not to mimic the human brain with electronics. You can only create a super dumb AI that way, because the final product is not going to be much smarter than the model it originates from.


And why are we still going on about building-sized computers? I will quote my own post:

If you read a bit more about IBM's neurosynaptic chip, it currently has only 1 million neurons and 256 synapses per neuron (compared to tens or even hundreds of thousands of synapses per neuron in mammalian brains), but IBM's goal is to scale their existing design up to incorporate more neurons and synapses. And given that the current chip is the size of a postage stamp and runs on just 70 mW, I fail to understand where predictions of "building sized computers" are coming from.

Just going by the number of neurons, to simulate a human brain you would need 90,000 of those chips (you'd only need 200 for a rat's brain). The chips might be small, but they need to be integrated on some sort of motherboard with RAM, power, interfaces, and other parallelization hardware... With all the associated hardware, they could probably squeeze a couple of them into a 1U 19-inch rack.
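A quick sanity check of the chip-count arithmetic, assuming round figures of roughly 90 billion neurons in a human brain and 200 million in a rat's (estimates vary):

```python
# Neuron-count arithmetic behind the "90,000 chips" estimate.
# Rough figures: ~90e9 neurons (human), ~200e6 (rat),
# 1e6 neurons per neurosynaptic chip.
HUMAN_NEURONS = 90e9
RAT_NEURONS = 200e6
NEURONS_PER_CHIP = 1e6

chips_human = HUMAN_NEURONS / NEURONS_PER_CHIP
chips_rat = RAT_NEURONS / NEURONS_PER_CHIP

print(f"human brain: {chips_human:,.0f} chips")  # 90,000
print(f"rat brain:   {chips_rat:,.0f} chips")    # 200

# Caveat: this counts neurons only. At 256 synapses per neuron versus
# tens of thousands in mammalian brains, connectivity falls far
# shorter than the neuron count alone suggests.
```

Note that this matches only on neuron count; synapse density is the much bigger gap.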

Surely it will be possible to densify the integration to a certain extent, but I'm pretty sure that the first implementation of human-level AI will be a massively parallel supercomputer in a large research datacenter.

Edited by Nibb31

Sigh. A lot of posts in this thread that were not thought through.

1. Nobody credible talks about the Singularity going on forever. A better model is a step function: a sudden, near-vertical increase in technology from (1) what we got to with human minds over a few thousand years, to (2) the limits of what can be developed using systematic experimentation and engineering.

By "systematic experimentation", I mean something like a laboratory the size of a continent, systematically trying each and every possibility for useful physical phenomena. Systematic engineering is starting with the problem description and designing solutions and measuring their efficiency, systematically investigating the most promising possibilities.

Why would it happen this way? Because biological brains are stupid, ludicrously so. Reasonable estimates for how much better you could do using existing silicon fabrication technology are speedups of thousands to millions of times. You only need a small speedup to bootstrap/step intelligence forward to physical limits.

Example: Google's neural net tools are so versatile they can help you write code and design better neural net tools. You use the tools to design an improved tool, which you use to...

You probably hit a wall eventually limited by the computer chips and memory the tools run on. But if the tools are now versatile enough that you can use them to help you design faster computer chips/denser memory, then...

Eventually your designs get so complex that human designers are just lost. But now the tools are helping to overcome that too, perhaps by translating hideous billion-transistor functional blocks into something that makes sense. And the tools are helping with verification and testing...

And so on, until eventually the human designers are no longer part of the loop at all, as they are the bottleneck.
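The tool-improves-tool loop described above can be caricatured in a few lines of Python. This is a toy model, not a forecast: the `gain` and `ceiling` values are invented numbers standing in for "how much each generation of tools improves the next" and "the physical limits of the substrate".

```python
# Toy model of the tools-design-better-tools feedback loop.
# Capability multiplies by a fixed gain each generation, then
# saturates at a hard physical ceiling: a step up to a plateau,
# not unbounded growth.
def bootstrap(capability=1.0, gain=1.5, ceiling=1e6):
    generations = 0
    while capability < ceiling:
        capability = min(capability * gain, ceiling)
        generations += 1
    return generations

# With these invented numbers, the jump to the ceiling takes only
# a few dozen generations of self-improvement.
print(bootstrap())  # 35
```

The shape is the point: the loop terminates at the ceiling rather than running forever, which matches the step-function framing above.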

2. Nobody has a "choice" whether to participate or not. The beings who plug themselves into a global computer network, upgrade their minds, and make themselves immortal will be unstoppable. Adapt or die: either join them or you will be so obsolete that you may as well be dead.

- - - Updated - - -

One thing I'm sure of: the way to make a super AI is not to mimic the human brain with electronics. You can only create a super dumb AI that way, because the final product is not going to be much smarter than the model it originates from.

Well, you either didn't think it through or you are not sure at all. You can get a million-fold speedup. How much smarter would you be if you thought the same way you do right now, but had a million times as long to think things through? (And a virtual environment to "live" in, with simulation tools and computers as fast as you are.)
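As a back-of-the-envelope illustration, taking the million-fold figure above as a hypothetical rather than an established number:

```python
# Subjective thinking time for a mind running a million times faster
# than real time (hypothetical speedup from the post above).
SPEEDUP = 1_000_000

subjective_years_per_day = SPEEDUP / 365.25
print(f"{subjective_years_per_day:,.0f} subjective years per calendar day")
# -> roughly 2,738 subjective years of thinking for every day of wall-clock time
```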

Edited by SomeGuy12

*snip*Well, you either didn't think it through or you are not sure at all. You can get a million fold speedup. *snip*

Well, it looks like you didn't think this through. What makes you think we could ever achieve that kind of speedup? We are already reaching the physical limits of integrated circuits. And even if you could speed up the process, accelerating dumb thoughts won't make the brain smarter, only faster: you get the same dumb result, just sooner. To create a super AI we need to be creative by other means (a completely new and revolutionary programming paradigm), not by trying to simulate brains.


Well, I read all your comments, everyone is free to make their own predictions and not share my view.

I just thought it was a very important topic that we should all pay more attention to, because it may help us make future decisions about our lives, or about which types of jobs will be paid more and which may be at risk. I think it's healthy to accept that the world is changing faster than it did in years past.

I just wanted to summarize everything I've been reading about this for months in the best way I could. Still, the post is very long, which is why I put some sections in quotes in case someone chooses to skip them. But if you are going to comment on something, please have the courtesy to mention how much of the post you read and which sections you skipped.

There are also tons of links that might take a few hours to watch in full. For those who are really interested in knowing more about this topic, I guess you have a lot of info to get started.

I just don't get what the driving force will be to make EVERYONE willingly plug themselves into a giant computer network. Plus, the biggest factor these predictions forget about is money. Technology might be progressing fast, but the rate at which things become widely used and cheap is slower and steadier.
That is just a picture from Time magazine that is not even mentioned in the magazine's article.

You are missing the true point about the singularity.

The problem I always have with the singularity is the assumption that technology can be improved nearly indefinitely.

I've never really seen any justification for that assumption. For all we know we're very close to the practical limits of technology: a few more major breakthroughs and then stagnation until our extinction. Not because we stop trying, but because there is simply no way to improve technology within the current limitations of physics.

What limits of technology? You mean silicon?

???

But you are right that it cannot be improved indefinitely. The singularity has the possibility of increasing tech and knowledge so fast that you would know everything in a very short amount of time, no matter how complex the universe is. That is the same thing I mention in the OP.

But there is still a lot of margin for improvement... quantum computers working with quantum-scale mechanisms. Or, better than a computer, a learning machine.

This. I remember reading an article about processor chip design stating that we're starting to run into physics as the barrier to improving a CPU: you can't place the circuits any closer together because there's no way to stop the current from jumping, there's no way to mitigate the heat generated, etc.

I also saw an interesting perspective that we're ALREADY in the singularity, which started with the telegraph. Massive technological advancement and near-instant worldwide communication, everything since then has been refinement of the basic concept.

Quantum computers with a learning-machine architecture can improve speeds by... (I don't really know the exact value; just imagine something that you cannot imagine).

The telegraph, computers, and communications have nothing to do with the singularity.

Read the post, please. The singularity is bound to a single tech: a learning machine. Why? Because it is the only way that tech can escape the limitations of our brain.

It is already explained in the OP, but read my next answers.

Nope nope nope.

Assume the singularity is a thing that happens. There will be, broadly, two classes of people: those that care about the singularity, and those that don't.

Those that do won't breed. Those that don't will breed. Since human behaviour is susceptible to genetic influence, the number of people in new generations who will plug themselves in will approach zero. (If you don't believe me, check out the statistics on the rates of Amish defection.) Ergo, eventually you'll have a human population with no interest in the fact of the singularity going about their business.

The only way this could not be the case is if the technology does not in fact cause a drop in birth rates. But of course that's absurd, because said drop in birth rates has already happened.

Again, this has nothing to do with plugging us in. This has to do with the learning machine we create, one that is good enough to improve its own design. That might itself be the hard AI, or something that eventually creates a hard AI.

Then this hard AI will do whatever it chooses to do; nobody can predict what it will do, or stop it.

If it chooses that you should plug into something, you will, whether you want to or not. If it chooses to kill us, it will. If it chooses to ignore us and leave, then we are still on the singularity's edge: anyone can repeat the process until a new hard AI chooses a different path for us.

"the singularity" ignores the concept of "diminishing returns"

There are many positive feedback loops all over the place; aside from black holes, none of them have led to singularities.

?? I will really enjoy watching how you break your head trying to explain that in a logical way :)

Second... what does a black hole have to do with this?

Read the definition of singularity.

Also read the OP; I explain what singularity means in this case.

First of all, let me say that I find the idea of predicting the future utterly ludicrous (and somewhat arrogant, however powerful your brain is).

We are always predicting the future... sometimes only one minute ahead, other times many years ahead.

Those with more info who are good at this will do it better than those who aren't (yeah, there is no 0 or 1 here).

Calling them arrogant points to a problem more related to the observer than to the subject.

I believe that we might be able to determine crude trends over a foreseeable future (basically 10 years or so), but anything beyond that is too unpredictable. In the decades that preceded, nobody predicted the transformations that would happen with the Internet, WWI, WWII, the Great Depression or the end of the Cold War, yet those events had huge impacts on culture, society and technology.

We are totally incapable of comprehending the changes that might occur beyond 2020 or 2030.

One thing we can predict is that the singularity will happen; whether it is tomorrow or in 2200, nobody knows for sure.

That is because it is tied to a learning machine, and we know learning machines can exist because our brain exists. Also, we have already made a lot of them, and they are helping us in many areas.

The second fact is what happens once a learning machine leaves behind the limitations of our brain, such as:

Learning takes us time (many years), and each time somebody is born and dies, the process has to start over.

Our communication is slow, our way of learning is slow too, we can only focus on one task at a time, we get tired, we don't get more intelligent with each generation (on our time scale), plus millions of other limits we have.

So if a learning machine overcomes all those limits and can improve itself in step with its growing knowledge, then you have the singularity. And that is a fact.

What happens after that, nobody knows. That is why it is called the singularity.

I see no reason to believe that an artificial self-consciousness is impossible with today's (or near-future) technology. We know how neurons work at the biological level. We can observe how they are arranged and how they interact. We have the understanding to model and simulate neural networks.

There are physical limits to the size of microelectronics, but nothing stops you from adding parallel processing as needed. We can already do impressive stuff within those limits. Scientists have already created working simulations of the brain of a fly or a rat. It's just a matter of time, and of throwing more computing power at it, before we can model more complex neural systems.

Early models will probably not be "brain sized" and probably won't work in real time, but there's no reason to believe that we can't have a building-sized supercomputer that can simulate 10 minutes of human brain function over 10 days of calculation. That's a start.

Read the Brain vs CPU section.

Also, I explain there that you don't need an exact copy of the brain to have an effective learning machine.

We invented the wheel, which is a very efficient mechanism for moving things around, versus all the different mechanisms that evolution produced over billions of years.

Simulating the brain is the worst path you could take to accomplish this.

It is good to find inspiration in how the brain solves things, but that does not mean it is the only way.

Also, simulating neural networks is not the best approach; I already mention how you can gain many orders of magnitude with architectures based on neural networks, and even more if they are based on quantum mechanical properties.

I do find it slightly amusing that on the one hand, the OP states that:

And then proceeds to make a number of predictions about the singularity and how it will work out.

I'm also somewhat skeptical of the notion that we can create an AI that is capable of self-improvement to the extent required for a singularity. I can envisage a Nibb31 type future in which we've managed to make a building sized working simulation of a human brain and I can imagine that eventually we'll be able to fit that building sized computer into something the size of a human skull. What I'm much less convinced about, is whether that computer will ever be anything more than a copy of a human brain and so ever have more than human intelligence.

It might be that intelligence (however we define that) does turn out to be a simple function of processing power - i.e creating a superintelligence becomes a question of simply throwing more electronic neurons at the problem. But if not, we're going to need some fairly fundamental understanding as to how the huge collection of neural circuitry in the human brain actually combines to produce a human consciousness. And that, I suspect, is going to be a lot harder than 'merely' making an electronic copy of that brain.

Read the other answers, and also the OP.

One thing I'm sure of: the way to make a super AI is not to mimic the human brain with electronics. You can only create a super dumb AI that way, because the final product is not going to be much smarter than the model it originates from.

OK, you got it.

- - - Updated - - -

Sigh. A lot of posts in this thread that were not thought through.

1. Nobody credible talks about the Singularity going on forever. A better model is a step function: a sudden, near-vertical increase in technology from (1) what we got to with human minds over a few thousand years, to (2) the limits of what can be developed using systematic experimentation and engineering.

You seem to understand the basic idea, and you gave some good explanations.

But I don't understand your main point about the step function, or your definition of the singularity.

A tech singularity is not an effect that goes on forever...

Try to read the OP's "the human conclusion" section.


Really, an AI in its own right is somewhat unnecessary (though a huge boon) to any possible singularity. The point has always been that the tools we build today make tomorrow's work easier than yesterday's tools did. Note, a tool is not JUST hardware. A CPU just sitting there is not a useful tool. A CPU running a program to solve a problem is a useful tool. A CPU running a better program to solve the same problem is a more useful tool. Further note, the usefulness of a tool is not just directly related to the existence of the tool, but to how many of them are being used. A simple example is a PS4, a powerful computer in its own right, but in the 4,000+ Air Force cluster, a force to be reckoned with indeed. A more pertinent example is how we are in this fascinating state where items like 3D printers and the "Maker/Hacker" ideology are suddenly acting like a force multiplier when it comes to inventiveness.

Sort of a mathematical example. You have 1M engineers, of which 10% are interested in inventing things, and of those, 2% actually have the resources to create the item they have thought up. That is 2,000 active inventors and 98,000 who just shrug and wistfully imagine the glory of their idea without going anywhere. Suddenly, someone invents a 3D printer that those engineers can buy. An extra 2% now have the resources to explore their idea: you are at 4,000 inventors. But perhaps you also get an extra 1% from the original 1M who explore this new interesting curiosity, so the new math says 4,400 inventors. Another person invents a coding system that makes it easier and more intuitive to program robots and other automated systems; apply the 2% and 1% again, and you now have 7,200 inventors. Plus, now that you have all these people playing around with these things, helping them improve, and now that they are quite cheap to use, then for a variety of reasons (inspired youth, non-engineers poking their heads into the field, etc.) we can add 1% more to our original million engineers, meaning that after these two little steps we now have 7,272 inventors.

All of these things feed back into themselves, and sure, you DO have an upper boundary equal to your population total, but even once you reach that saturation point there is a LOT of inventing going on. Invention isn't just making the tech in columns A or B better; it is also creating column C by realizing you can combine A and B. The more people creating new things, the more chances there are to combine things. Imagine, in a way, that inventing humans are the neurons and the tools they create and use are the connections. The more of both we have, the more "intelligent" we are. And the greatest thing about easier, more intuitive, yet powerful tools is that they drastically lower the bar of entry to their use. Perhaps an example.
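The arithmetic in the example above is the product inventors = engineers × fraction interested × fraction with resources, with each new tool nudging the fractions (and eventually the pool itself) upward. A sketch using the post's own illustrative percentages:

```python
# Active inventors = engineer pool x fraction interested in inventing
# x fraction with the resources to build (the post's illustrative model).
def active_inventors(engineers, interested, resourced):
    return round(engineers * interested * resourced)

print(active_inventors(1_000_000, 0.10, 0.02))  # -> 2000 (baseline)
# 3D printer: +2 points resourced, +1 point interested
print(active_inventors(1_000_000, 0.11, 0.04))  # -> 4400
# intuitive robot-programming system: same bumps again
print(active_inventors(1_000_000, 0.12, 0.06))  # -> 7200
# cheap, fun tools draw 1% more people into engineering at all
print(active_inventors(1_010_000, 0.12, 0.06))  # -> 7272
```

Read this way, the post's 4,400, 7,200, and 7,272 all fall out of the same product, which is exactly the compounding the author is pointing at.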
Let's say you have a teenage girl (apologies for this specific example, women of KSP), and she is that stereotypical doesn't-care-about-school, just-wants-to-have-fun type of teen. Very into makeup and such. Well, as a gift her father buys her a device (one that is getting a bunch of VC money, incidentally; it's a real thing) where she can select any colored pixel on her computer screen and print off any makeup type (lipstick, blush, etc.) in that color. Let's even say it has the option to let the user play with the ingredient mixtures so they can make a given item smoother, more textured, etc. This girl, who has no interest in engineering, suddenly has a simple-to-use tool that lets her craft colors of makeup that could never be bought before, and perhaps, after hearing from one of her friends that adjusting the balance of chemicals gave them a desired look, she starts adjusting the balances on hers and spreads that info around. You now have a non-engineer engaging in invention without even properly realizing that she is adding to the cumulative knowledge of humanity. THIS is the power and driving force of the singularity: when humanity's ability to actually use our capabilities becomes so trivial and widespread that simply going about your day ends up furthering us all, even in some small part.

And we are at the point where we are finally starting to ramp up that tool curve for the last part.

