
Everything wrong with our predictions (The Singularity is coming)


AngelLestat


Gentlemen, I, and I am sure many others, would like the Science Labs to be a place of debate on topics from all the sciences, even speculative technology, instead of a forum of dismissive argument and "not invented here" syndrome.

So I will ask you all to stay on topic and consider AngelLestat's premise, and not to be upset when one's own views are challenged.


Ok, so you propose a subject and then you shoot down everybody's opinion, except yours. We've seen this before.

So what is your point exactly?

Fraid I'm with Nibb31 on this one. I'm seeing a lot of hand-waving followed by more hand-waving seasoned with a dash of 'quantum' (because everything works better with quantum, amirite?) to rebut any disagreements.

Less snarkily, I'm seeing a lot of interesting things that neural net architectures can do but they all seem to be single tasks, such as voice recognition, pattern matching, machine vision and the like. I'm not seeing anything that even speculatively gets us from there to a general purpose learning machine, let alone one capable of bootstrapping itself to a higher level of intelligence, thus kicking off the singularity.

Edit: OK, maybe the kerbal approach of 'make it bigger' i.e. throwing more hardware at the problem, might work (heavy emphasis on the might). In which case we shouldn't need to wait for a bootstrapping AI to make itself super intelligent, since we already know that throwing more hardware at it will work just fine. Which rather defeats the notion of a singularity.

I confess that I didn't watch the Watson videos - perhaps I should. On the other hand, I would surely appreciate a link or two to a simple written article rather than yet another video. Us antiquated, soon to be obsolete whether we like it or not, biological types can be funny like that sometimes.

Edited by KSK

Ok, so you propose a subject and then you shoot down everybody's opinion, except yours. We've seen this before.

So what is your point exactly?

My point about what?

It's the same thing I said: I can understand that someone might think we are very far away from the singularity. That's OK.

But we have always been wrong about the mechanism of how progress increases; it is not linear. There will be no flying cars followed by colonies on other planets, travel to the stars, and becoming a parody of Star Trek with different civilizations.

That will never happen; once the singularity starts, there are no middle steps. Which might be another way to explain the Fermi paradox.

About shooting down everybody's opinion, I'm not sure what you are talking about... You can say that my view is wrong for X reasons, but I cannot answer and explain why I disagree?

Next time, state your opinion and clarify that you don't want an answer. :D

And we are at the point where we are finally starting to ramp up that tool curve for the last part.

Yeah, I agree with what you said. In the OP I explain that our recent tech gives us a more exponential progress rate, but we are still bound to the limits of our brain.

Each time it takes more years to learn and become specialized in something, so the true inflection point comes when tools make other tools "without" human supervision.

Some people call any point on an exponential curve a singularity without knowing its true trend. But that is not the definition of singularity that I am using, which may bring confusion.

Fraid I'm with Nibb31 on this one. I'm seeing a lot of hand-waving followed by more hand-waving seasoned with a dash of 'quantum' (because everything works better with quantum, amirite?) to rebut any disagreements.

We already have some kind of quantum computers... and the theory behind them is the most solid theory we have, more solid than relativity and thermodynamics.

And people were mentioning the "silicon limit" as if it were a universal limit.

Less snarkily, I'm seeing a lot of interesting things that neural net architectures can do but they all seem to be single tasks, such as voice recognition, pattern matching, machine vision and the like. I'm not seeing anything that even speculatively gets us from there to a general purpose learning machine, let alone one capable of bootstrapping itself to a higher level of intelligence, thus kicking off the singularity.

You can combine different NeuNets. Let's say you have a robot: one NeuNet has already learned how to move around using the robot's arms and legs; you have another NeuNet that identifies the objects it sees with the camera, another for audition, and another to understand language and structure. Once those NeuNets have learned, you can run them without a big processor, or you can design hardware based on those already trained NeuNets.

You may need a new small NeuNet that links the responses of the other NeuNets, tying visual objects to words, sounds, and movement actions.

You can copy that to other robots, so they already know. Of course this is not yet enough to bring awareness, but it gives you an idea of the huge game change this technology represents.

There are more videos in the "Neural Networks" spoiler section. The fact that these NeuNets can be combined by genetic algorithms, selecting those that get the best results, gives us another clue about how they may improve themselves.
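The combining idea above can be sketched in a few lines. This is only an illustrative toy, not any specific system from the thread: the two "trained" nets are stand-ins (their weights are random here), and the small linker net that maps their combined outputs to actions is the hypothetical piece the post describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two already-trained, frozen networks: in the post's example
# these would be a vision net and an audio net. Random weights here, purely
# for illustration.
W_vision = rng.standard_normal((16, 8))
W_audio = rng.standard_normal((16, 8))

def vision_net(pixels):
    # returns a feature vector describing what the camera "sees"
    return np.tanh(pixels @ W_vision)

def audio_net(samples):
    # returns a feature vector describing what was "heard"
    return np.tanh(samples @ W_audio)

# The small "linker" NeuNet: the only part that still needs training.
# It maps the concatenated outputs of the frozen nets onto actions.
class LinkerNet:
    def __init__(self, n_in, n_actions):
        self.w = rng.standard_normal((n_in, n_actions)) * 0.1

    def act(self, features):
        return int(np.argmax(features @ self.w))  # pick highest-scoring action

pixels = rng.standard_normal(16)
samples = rng.standard_normal(16)
features = np.concatenate([vision_net(pixels), audio_net(samples)])

linker = LinkerNet(n_in=16, n_actions=4)
action = linker.act(features)  # an action index in [0, 4)
```

The design point is that the heavy nets stay frozen and reusable (copyable to other robots), and only the small linker has to be trained for the new combined behavior.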

Edit: OK, maybe the kerbal approach of 'make it bigger' i.e. throwing more hardware at the problem, might work (heavy emphasis on the might). In which case we shouldn't need to wait for a bootstrapping AI to make itself super intelligent, since we already know that throwing more hardware at it will work just fine. Which rather defeats the notion of a singularity.

Yeah, that is one of the things I have doubts about; it should be true as you mention. Maybe we don't need an AI with consciousness, but I'm not sure.

I confess that I didn't watch the Watson videos - perhaps I should. On the other hand, I would surely appreciate a link or two to a simple written article rather than yet another video. Us antiquated, soon to be obsolete whether we like it or not, biological types can be funny like that sometimes.

Watson is a good example of what a cognitive machine can do. But I guess the best examples of NeuNets are these:

In the second link, they feed all the pixels of the game as input values to the NeuNet; then, without rules or anything else, the NeuNet has to learn to play just by looking at the screen.

This means many things: it learns to recognize the shapes of its enemies and of the character it controls, it learns to predict movement, and more.

So a NeuNet can be used for many tasks; a single NeuNet can even be used to understand sounds, visuals, and movements all in one, since the structure does not change. The principle of self-learning is simple: relate impulses and connections to create patterns, then reward those patterns that give the most accurate results.
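The "reward the patterns that give the most accurate results" principle can be shown with a deliberately tiny toy, in the spirit of the genetic-algorithm selection mentioned earlier in the thread. Everything here is made up for illustration: the task (predict the sign of a sum) and the mutate-and-keep-the-best loop are not from any system discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn to output the sign of the sum of five inputs.
X = rng.standard_normal((200, 5))
y = np.sign(X.sum(axis=1))

def reward(w):
    # fraction of cases where this weight "pattern" gives the right answer
    return float(np.mean(np.sign(X @ w) == y))

w = rng.standard_normal(5)
best = start = reward(w)
for _ in range(500):
    candidate = w + 0.1 * rng.standard_normal(5)  # random variation of the pattern
    score = reward(candidate)
    if score >= best:          # keep only the patterns with better results
        w, best = candidate, score
```

No gradients, no rules about the task: variation plus selection on a reward signal is enough for the weights to climb toward a working pattern.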


Well, it looks like you didn't think this through. What makes you think we could ever achieve that kind of speedup? We are already reaching the physical limits of integrated circuits. And even if you could speed up the process, accelerating dumb thoughts won't make the brain smarter, only faster; you basically get the same dumb result, just faster. To create a super AI we need to be very creative by other means (a completely new and revolutionary programming paradigm), not by trying to simulate brains.

No, you didn't do the most cursory research or napkin math. (Yes, this is childish tit for tat.) The reason I say that is simple: open a neuroscience book and find out what happens when an action potential hits a synapse. Think about it for a little bit. TLDR: a 1-clock solution that mimics it exists, by pre-calculating the threshold and arming drive gates, with an enable signal, on the inputs that will push a neuron over its threshold. So in one clock, the incoming signal arrives, and the drive gates set the flip-flops for the next synapse over.

TLDR, what I'm describing is using circuits that are the same speed as current silicon, but you have a massive, massive pile of them. They are either stacked on top of each other in 3d to make gigantic multi-layer cubes or there's a lot of them filling a building. The limiting factor becomes the speed of light between different simulated neurons.

Do your own research; you won't believe my findings. What is the longest possible wire run inside the brain? What is the propagation speed of the fibers crossing the corpus callosum?

Google "hollow core optical fiber". You'll discover that if the ultimate bottleneck is the speed of light, you could build a system that is a million-fold or more faster. It ultimately depends on how dense the resulting cube of brain-emulating circuitry is: a 1-meter cube would be a lot easier to cool and power than a 10-centimeter cube.
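The napkin math behind that "million fold" figure can be made explicit. The two speeds below are assumed round numbers, not measurements: fast myelinated axons conduct at very roughly 100 m/s, while light in hollow-core fiber travels at close to c.

```python
# Assumed figures, for order-of-magnitude purposes only:
axon_speed = 100.0     # m/s, rough upper end for myelinated nerve fibers
fiber_speed = 3.0e8    # m/s, approximately c, as in hollow-core optical fiber

# If signal propagation is the bottleneck, the ceiling on speedup is the ratio:
speedup = fiber_speed / axon_speed  # -> 3,000,000x, i.e. "a million fold or greater"
```

Actual speedup would be lower (the wire runs in a building-sized machine are much longer than the brain's), which is why the post treats this as a ceiling, not a prediction.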

In the much nearer term, before we talk about theoretical limits, you could much more easily get 10,000 times speedup. You'd need a machine the size of a building and the hollow core optical fiber runs could stretch to kilometers. This is something present day humanity could build if the budget were there and we knew the pattern for the neural circuits. (by, say, scanning a brain)

I'm not saying you couldn't do even better by taking that same pile of brain-emulating circuitry and building a machine with the same technology that is organized in some far more efficient, non-human way. I'm disagreeing with your premise that we have to do that, or that it would be pointless to emulate human minds. We might not be able to build a sentient machine using fully synthetic algorithms if the people designing that machine are mere humans, limited by human-scale thought speeds and memory. A sentient machine is thousands of separate subsystems, all of them capable of learning and evolving, all of them capable of affecting each other. How would you get a system that complex to even work, without crashing or freezing up, if it's impossible to test or debug systems that constantly change their behavior based on learning input? What if, say, 1 million of the world's best people can't make it work? (For one thing, there's a logarithmic relationship between throwing people at a problem and results: the more people you add, the less efficient each marginal person becomes, and there may even be a point where adding another person to a design team reduces performance.)

Copying a known good design from the human mind sounds to me like a lower risk project. Once you have it working and running at higher speeds, you have your new super-intelligent best friends work out a way to build full synthetic AIs. I think a human being who thinks 10k+ times faster and has instant mental access to tools like databases of knowledge, calculation routines, and so forth would be a de facto superintelligence. Even if you disagree that 10,000 times speedup is possible, do you concede that if you could build such a machine it would be super-intelligent?

One final assumption check: you do know I'm talking about using current large-scale silicon fabrication techniques to make custom hardware circuits that duplicate brain function. I'm not talking about a processor; I'm talking about a design where every synapse gets dedicated hardware to calculate the voltages and threshold at that synapse. You cannot meaningfully compare what I am describing to, say, charts of calculations per second for Intel processors over time. Processors solve problems that are serial: step N2 depends on the answer from step N1. A brain simulation is inherently parallel: each synapse calculates its voltage separately from all the others.

Our limits for silicon mean that we cannot get circuits to solve serial problems any faster at the present time: faster switching rates make too much heat, and the propagation delays are too long.
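The serial-vs-parallel distinction is easy to show concretely. The sketch below is illustrative only (random voltages and inputs, assumed ranges): each synapse's update depends only on its own state and input, so one vectorized operation, standing in for dedicated per-synapse hardware, updates all of them in a single "clock" with no serial dependency between steps.

```python
import numpy as np

rng = np.random.default_rng(3)

# One million simulated synapses, each with its own state and its own input.
# All numbers are assumed, for illustration only.
n_synapses = 1_000_000
voltage = rng.uniform(-70e-3, -50e-3, n_synapses)  # volts, assumed resting range
drive = rng.normal(0.0, 1e-3, n_synapses)          # incoming input per synapse

# One "clock": every synapse is updated at once. No step here waits on the
# answer from a previous step, unlike a serial N2-depends-on-N1 computation.
voltage = voltage + drive
```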

Edited by SomeGuy12

And why are we still going on about building sized computers? I will quote my own post:

If you read a bit more about IBM's neurosynaptic chip, it currently only has 1 million neurons and 256 synapses per neuron (compared to tens or even hundreds of thousands of synapses per neuron for mammalian brains), but IBM's goal is to scale up their existing design to incorporate more neurons and synapses. And given that the current chip is the size of a postage stamp and runs on just 70 mW, I fail to understand where these predictions of "building-sized computers" are coming from.

Just going by the number of neurons, to simulate a human brain you would need 90,000 of those chips (you'd only need 200 for a rat's brain). The chips might be small, but they need to be integrated on some sort of motherboard with RAM, power, interfaces, and other parallelization hardware... With all the associated hardware, they could probably squeeze a couple of them into a 1U 19-inch rack unit.

Surely it will be possible to densify the integration to a certain extent, but I'm pretty sure that the first implementation of human-level AI will be a massively parallel supercomputer in a large research datacenter.
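The chip counts above follow directly from the neuron numbers. The neuron-count estimates are rough figures commonly cited, not from this thread:

```python
# Rough check of the chip counts quoted above (neuron counts are estimates):
human_neurons = 86e9       # commonly cited estimate for the human brain
rat_neurons = 200e6        # order-of-magnitude estimate for a rat brain
neurons_per_chip = 1e6     # the IBM chip described in the thread

human_chips = human_neurons / neurons_per_chip  # 86,000 (the post rounds to 90,000)
rat_chips = rat_neurons / neurons_per_chip      # 200, as stated
```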

It's far worse with the synapses. The design requires high interconnect all over; the number of synapses you can connect between chips will be limited, and if you increase the number of chips you reduce the bandwidth between any two of them.

Even if you use all the tricks in the book (a <10 nm process, 3D design on chip, and stacked chips), you are unlikely to match a rat on a single chip using known technology; the transistor count is already 5.4 billion.

Note that increasing the performance of one chip gives you a smarter animal brain, but it doesn't help with the interconnect issue, which will have to scale with both the number of neurons per chip and the number of chips used.

I'm not saying that an array of thousands of rat brains connected together would not be useful, but it would not be in any way similar to a human, even if it had more neurons.
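A napkin model makes the interconnect point concrete. The per-chip bandwidth budget below is an invented round number; the shape of the result, not the figure, is what matters: with all-to-all links and a fixed off-chip budget per chip, the bandwidth left between any two chips shrinks as the chip count grows.

```python
# Assumed: each chip has a fixed off-chip bandwidth budget it must split
# across direct links to every other chip.
per_chip_bandwidth = 100e9   # bytes/s off-chip per chip, invented figure

def pairwise_bandwidth(n_chips):
    # the budget is divided across links to the other n-1 chips
    return per_chip_bandwidth / (n_chips - 1)

bw_200 = pairwise_bandwidth(200)      # ~0.5 GB/s between any two of 200 chips (rat)
bw_90k = pairwise_bandwidth(90_000)   # ~1.1 MB/s between any two of 90,000 (human)
```

So scaling from a rat-sized array to a human-sized one costs roughly a factor of 450 in pairwise bandwidth, before counting the extra traffic each chip's own neurons generate.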

- - - Updated - - -

Why? For a virtual entity, physical footprint or location is irrelevant. "On the Internet, nobody knows you're a building-sized AI."

You would, however, be more expensive than the human who can take your job :)

Yes, an AI would benefit from being very well connected to traditional computers and networks, far better than we could ever be, even with a direct brain-to-computer interface.


I'm not saying that an array of thousands of rat brains connected together would not be useful, but it would not be in any way similar to a human, even if it had more neurons.

Maybe, but maybe not... Is there anyone here who can speak (with some authority) to how much of our brain is used for actual thinking and creative processes? We've got billions and billions of neurons with tens or even hundreds of thousands of synapses each, but what percentage of those are used for the type of pattern matching that neural nets are already getting good at (like vision systems, voice recognition, etc.)? How much just controls our body's systems? How much is used for what classic computer systems are good at - memory? Do we need to be able to simulate the whole blob of grey matter to yield any sort of intelligence? Or can we get away with building a machine that is only comparable to the "thinking" part of our brains? I can't help but wonder.

And to be clear, I have mixed feelings about AI. I am fascinated by some of the recent progress that has been made with neural networks and deep learning algorithms, but I have always been sceptical. I couldn't get through Ray Kurzweil's book "The Singularity is Near". He struck me as a nut... I guess only time will tell where future advances take us and how soon we manage to develop true AI, but I am increasingly convinced that it will come sooner than even my sceptical self thought possible only a few years ago.

We seem to be at the beginning of a new paradigm, where we have stopped trying to put the square peg of intelligence into the round hole of Von Neumann architecture machines and are focusing instead on trying to make it fit into an architecture that more closely models biological computers. So far the results have been impressive and we should be prepared, with our policies, ethics and security, for unexpected breakthroughs.


I can, sorta. I have a degree in a related field and took a medical school neuroscience course. As far as I know, at all times, all synapses everywhere, the cells are active and slowly updating their states. However, the truly "active" systems - neurons actually firing off action potentials - are only about 10% of the brain at the same time. With that said, if you were building an artificial equivalent, the only correct emulation requires you to mathematically calculate state updates to every neuron in the machine. The more quiescent ones you could state update less often.

This is why I think that designing massive multi-layer custom chips with dedicated circuitry for each and every neuron - IBM's approach - is the only practical way. CPU/GPUs are too inefficient and not meant for this.
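The "state-update the quiescent ones less often" idea can be sketched as a toy simulation. All numbers here are made up for illustration (they are not neuroscience parameters): roughly 10% of the neurons receive strong input on each tick, while the quiet rest only get their leak applied in batches every few ticks.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 1000
potential = rng.uniform(0.0, 1.0, n)   # membrane potential per neuron, arbitrary units
threshold = 1.0                        # firing threshold, assumed
leak = 0.98                            # per-tick decay factor, assumed

for tick in range(100):
    active = rng.random(n) < 0.10                  # the ~10% currently receiving input
    potential[active] += rng.uniform(0.0, 0.3, active.sum())
    if tick % 5 == 0:                              # lazy update for the quiet rest:
        potential[~active] *= leak ** 5            # apply 5 ticks of leak at once
    fired = potential >= threshold
    potential[fired] = 0.0                         # reset neurons that fired
```

Batching the decay of quiescent neurons this way trades a little timing precision for far fewer state updates, which is the efficiency argument for dedicated hardware that only "wakes" the circuitry that has work to do.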


I believe that the singularity, as the OP has called it, does not precisely exist. Progress is continuing at an ever faster rate, yes, but when has it not? What would make the appearance of computers more powerful than the human brain, programmed to do something we consider intelligent, change everything more than, say, the invention of electricity or nuclear energy? I am not attempting to say the invention of electricity or nuclear energy was not world-changing, but what would make the invention of bigger computers, and an AI that does something smart most of the time, bigger than the invention of electricity?

To put it very simply, society has always been impossible to predict fully, look at the 1950's, look at their vision of today. Even those with significant knowledge in relevant fields could not imagine the world of today unless by chance. Where is my nuclear powered rocket?

To put it very simply we have always been advancing exponentially and this is nothing new.

Now, about those non-stupid AIs. Programs do what you tell them to do. Always. That is why we have bugs (most of the time; it could always be hardware failure, but that is rare), and those bugs are normal. The human brain is a large electrochemical computer with exceptional speed, nearly bug-free programming, and billions of computer-years proving the code functional. Now you want to build an AI, mostly from scratch, to do what the human brain does? Good luck. Even if it is self-learning you still have problems, the biggest being the slow speed of learning, and the fun when your self-learning program learns incorrectly, resulting in bugs that will be the slow death of your program.

Now if you make an AI, will it act like a human? Eh, not really. Our activity has evolved out of a need to reproduce, eat, survive, etc. The goal of an AI does not need to include survival, reproduction, or eating. An AI does only what it is programed to do. So an AI would do exactly what it was designed to do. If that is design the best reactor, it will do so even if it means that it will stop existing. An AI would not have a human like morality.

I believe that we will probably never see an AI that can be described as human like unless we attempt to make that one.


I can, sorta. I have a degree in a related field and took a medical school neuroscience course. As far as I know, at all times, all synapses everywhere, the cells are active and slowly updating their states. However, the truly "active" systems - neurons actually firing off action potentials - are only about 10% of the brain at the same time. With that said, if you were building an artificial equivalent, the only correct emulation requires you to mathematically calculate state updates to every neuron in the machine. The more quiescent ones you could state update less often.

This is why I think that designing massive multi-layer custom chips with dedicated circuitry for each and every neuron - IBM's approach - is the only practical way. CPU/GPUs are too inefficient and not meant for this.

Well, the IBM chip probably uses less than 10% of its transistors at any time too; that is the reason for its low power use. A normal CPU doesn't use all of its transistors at once either, but it uses more.

And yes, this is a special chip. I guess it works more like a GPU than a CPU, but it's designed to emulate neurons, so it might have a module for each neuron, with most of its capacity going to handling the interconnects.

Image recognition is a basic brain function; even far more primitive brains, like this chip, have it, and the same goes for learning.

And even if we don't manage human-level brains, it would be extremely useful for loads of stuff, more so since the brain could simply load new functions on demand.


I can, sorta. I have a degree in a related field and took a medical school neuroscience course. As far as I know, at all times, all synapses everywhere, the cells are active and slowly updating their states. However, the truly "active" systems - neurons actually firing off action potentials - are only about 10% of the brain at the same time. With that said, if you were building an artificial equivalent, the only correct emulation requires you to mathematically calculate state updates to every neuron in the machine. The more quiescent ones you could state update less often.

This is why I think that designing massive multi-layer custom chips with dedicated circuitry for each and every neuron - IBM's approach - is the only practical way. CPU/GPUs are too inefficient and not meant for this.

GPUs are 20 to 50 times faster than CPUs at these tasks, so I guess it's not quite fair to put them on the same level.

Also, full brain simulation is the worst path to achieving a true AI.

That is the path the European Union is taking.

https://www.humanbrainproject.eu/

But in their defense, they do it just to understand how the human brain works.

Almost all the neurons in our brain are used to control and diagnose each internal organ we have; then there is the skin (billions of connections), extra millions for each of the senses, then those for memory (which are partly the same neurons), learning, actions, movements, DNA-driven instinct, etc. What remains of all that may be consciousness...

The only thing we know is that we can create an AI based on NeuNets that plays Mario Bros with only 20 neurons; how many do we need to do the same things our brain does?

I believe that the singularity, as the OP has called it, does not precisely exist.

As I asked on the second page: if someone didn't read the OP or skipped some of the articles, then please mention that before posting.

You not only didn't read the OP, you are also assuming and answering something that is not there.

Sorry if I am wrong, but that is the full impression I have after reading your post.

A very common mistake is to read the title, remember a position you formed over a similar article read years back, and assume it's the same thing.

Progress is continuing at an ever faster rate, yes, but when has it not? What would make the appearance of computers more powerful than the human brain, programmed to do something we consider intelligent, change everything more than, say, the invention of electricity or nuclear energy? I am not attempting to say the invention of electricity or nuclear energy was not world-changing, but what would make the invention of bigger computers, and an AI that does something smart most of the time, bigger than the invention of electricity?

This is not about a point on a progress curve; that is not the definition of singularity I am using (that's an old definition which doesn't make any sense). I define the singularity as the moment in time when an AI reaches the capacity to improve itself, escaping the limits of the brain.

To put it very simply, society has always been impossible to predict fully, look at the 1950's, look at their vision of today. Even those with significant knowledge in relevant fields could not imagine the world of today unless by chance. Where is my nuclear powered rocket?

To put it very simply we have always been advancing exponentially and this is nothing new.

We were always the ones who made the decisions. If we achieve building a hard AI, its intelligence could be so far beyond ours that we would no longer be the ones making the decisions. So after that point, it is impossible to predict what would happen.

Also, the accelerating rate of the tech explosion makes it impossible to continue with a normal way of living or doing business.

Why would you invest in a new clean way to get energy using wind or fusion, if it will take you 5 to 20 years of development to get started, when knowledge is increasing by huge steps year by year? You don't need to be a genius to know that it would be a very bad investment.

Now, about those non-stupid AIs. Programs do what you tell them to do. Always. That is why we have bugs (most of the time; it could always be hardware failure, but that is rare), and those bugs are normal. The human brain is a large electrochemical computer with exceptional speed, nearly bug-free programming, and billions of computer-years proving the code functional. Now you want to build an AI, mostly from scratch, to do what the human brain does? Good luck. Even if it is self-learning you still have problems, the biggest being the slow speed of learning, and the fun when your self-learning program learns incorrectly, resulting in bugs that will be the slow death of your program.

An AI does only what it is programed to do. So an AI would do exactly what it was designed to do. If that is design the best reactor, it will do so even if it means that it will stop existing. An AI would not have a human like morality.

The NeuNet structures needed to achieve an aware AI would not be bound by a rigid structure, because the easiest way to achieve that (in my opinion) would be to let the machine make its own structure inside the neural network.

Similar to how our brain does it, but without the need for the billions of sensors our body has.

I believe that we will probably never see an AI that can be described as human like unless we attempt to make that one.

I agree.

Edited by AngelLestat

This is not about a point on a progress curve; that is not the definition of singularity I am using (that's an old definition which doesn't make any sense). I define the singularity as the moment in time when an AI reaches the capacity to improve itself, escaping the limits of the brain.

Oops, sorry about that.

We were always the ones who made the decisions. If we achieve building a hard AI, its intelligence could be so far beyond ours that we would no longer be the ones making the decisions. So after that point, it is impossible to predict what would happen.

Also, the accelerating rate of the tech explosion makes it impossible to continue with a normal way of living or doing business.

Why would you invest in a new clean way to get energy using wind or fusion, if it will take you 5 to 20 years of development to get started, when knowledge is increasing by huge steps year by year? You don't need to be a genius to know that it would be a very bad investment.

1. Why does the program have to make decisions for us again? Why not instead use a neural net as we already do for certain things, to design individual things for us and under our directions. Or even better yet, with the progress on brain computer interfaces which would come with increased understanding of the brain, connect the computers to our brains?

2. To put it simply it has always been impossible to predict what will happen, progress has a way of having far too many unintended consequences, not all of them bad, to predict. Nuclear weapons leading to better medicine is the prime example I can think of.

3. You would invest in fission or any other long-term project because it is your only option, and while progress is continuous it needs to be sustained by something. In other words, and on the most basic level, if you do not build a fission reactor now, then how will you power the city tomorrow? Buying something today instead of tomorrow has always meant some inefficiency, but it still happens.

The NeuNet structures needed to achieve an aware AI would not be bound by a rigid structure, because the easiest way to achieve that (in my opinion) would be to let the machine make its own structure inside the neural network.

Similar to how our brain does it, but without the need for the billions of sensors our body has.

Our brain also has some impressive hardware that, quite honestly, has taken millions of years to complete. To complete that in a computer without just copying it somehow would be simply absurd.

____________

Creating a computer program that could actually act remotely like a human brain, even with computers far more powerful than one, would be quite the accomplishment. And that is assuming such computers are right around the corner. It has been said that "The best-laid schemes o' mice an' men / Gang aft agley, / An' lea'e us nought but grief an' pain, / For promis'd joy!" I believe that applies to this situation. We are attempting to predict something far in the future that relies on yet-to-be-discovered principles acting as planned and, more importantly, on people acting as planned. People never act as planned, and quite often the universe does not either. We are nearing the point at which transistors cannot be made smaller; quantum computers look promising but have problems; and chemical computing is a long way off and would not be able to make something much more powerful than the human brain. It would be quite presumptuous to assume that we have this in the bag.


Oops, sorry about that.

1. Why does the program have to make decisions for us again? Why not instead use a neural net as we already do for certain things, to design individual things for us and under our directions. Or even better yet, with the progress on brain computer interfaces which would come with increased understanding of the brain, connect the computers to our brains?

Neural networks like the ones we actually use are the first step, but you cannot stop progress; someone will always try to find a way to achieve consciousness. When that happens, an AI will make its own goals.

You may say it's silly: why would someone want to do that? The answer is simple: just because we can.

But if that answer is not enough: neural networks will still have a limit, which is what products we want and what is best for us. If we use our simple brains to try to answer those questions, we can't; we don't really know what is best for us, so we may create a superintelligence to tell us. Of course, it may be a very bad idea...

2. To put it simply it has always been impossible to predict what will happen, progress has a way of having far too many unintended consequences, not all of them bad, to predict. Nuclear weapons leading to better medicine is the prime example I can think of.

40,000 years ago, it would have been easy to predict the state of progress 5,000 years ahead; 5,000 years ago, the window was about 500 years; by 1900 that frame was reduced to 50 years; by 1950 it was close to 25 years; and now we have reached a point where we cannot predict more than 10 years ahead. The closer we are to the singularity, the harder it is to make accurate predictions. So when a hard AI is created, our horizon for accurate predictions will be reduced to a year, then months, and in time days.

3. You would invest in fission or any other long-term project because it is your only option, and while progress is continuous, it needs to be sustained by something. In other words, at the most basic level: if you do not build a fission reactor now, how will you power the city tomorrow? Buying something today instead of tomorrow has always meant some inefficiency, but it still happens.

The only investments that would make sense are those that increase the power of the AI, because if that AI can discover how to use antimatter in a very simple way (say, just 5 years after it begins), then your other investments are a waste of time and resources.

All investment is based on risk; if the risk of failure is high, nobody will put a lot of effort and resources into it (your product may never see the light of day because it will be outdated by other products, or it will no longer be needed because other discoveries sidestep that particular problem in another way).

Our brain also has some impressive hardware that, quite honestly, has taken millions of years to complete. To achieve that in a computer without just copying it somehow would be simply absurd.

As I explained in the OP, we invented the wheel, which has nothing to do with anything in the animal world.

It is a very simple idea that works better (in the average case) than legs or other complex locomotion mechanisms.

The body and the brain were not designed from zero; they follow an evolutionary path.

So at the beginning it may have been a good idea to have some sensors connected directly to the muscles, and then evolution found it useful to put some neurons in between to interpret those signals better and take an action suited to the problem.

But if there is a better way to do it, you cannot erase all your progress and start again; you have to work with what you have. That is evolution.

Now think about how neurons connect to other neurons: almost all connections are between nearby neurons. A neuron will not create a new connection to a very distant neuron, which is something electronics can do without problem. So all the neurons along the path to another key neuron may add no processing utility.

Long-range connections can only be produced by genetic information.

Because of that, many of our neurons and connections may not be adding much to our processing power.
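To illustrate the point in code (a toy sketch, not any real framework): in software, a connection between any two virtual neurons is just an entry in a weight matrix, so the "distance" between them costs nothing.

```java
public class FullyConnected {
    public static void main(String[] args) {
        final int n = 6; // six virtual neurons
        // weight[i][j] = strength of the connection from neuron i to neuron j;
        // any neuron can connect to any other, near or far
        double[][] weight = new double[n][n];

        weight[0][1] = 0.5;     // "short" connection to a neighbour
        weight[0][n - 1] = 0.5; // "long" connection to the most distant neuron

        // both connections are a single array write; there is no extra cost
        System.out.println(weight[0][1] == weight[0][n - 1]); // prints "true"
    }
}
```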

Edited by AngelLestat
english errors
Link to comment
Share on other sites

40,000 years ago it would have been easy to predict the state of progress 5,000 years ahead, just as 5,000 years ago a span of 500 years was predictable. But by 1900 that window had shrunk to 50 years, by 1950 to about 25, and now we cannot predict more than 10 years ahead. The closer we are to the singularity, the harder it is to make accurate predictions. So once a hard AI is created, our window for accurate predictions will shrink to a year, then to months, and in time to days.

On the other hand the rate of change might have slowed down since 1960.

The societal changes from 1905 to 1960 were much larger than from 1960 to 2015, and there were far more fundamental discoveries.

Improvements on the stuff we discovered then and the IT revolution might blind us a bit here.

You have two effects, improvements and cost increases; at some point the costs increase faster than the improvements and progress in the field slows to a crawl. New fundamental discoveries might revive the field again.

Yes we have new stuff in the pipeline, GM, AI and metamaterials are some of them.

The only investments that would make sense are those that increase the power of the AI, because if that AI can discover how to use antimatter in a very simple way (say, just 5 years after it begins), then your other investments are a waste of time and resources.

All investment is based on risk; if the risk of failure is high, nobody will put a lot of effort and resources into it (your product may never see the light of day because it will be outdated by other products, or it will no longer be needed because other discoveries sidestep that particular problem in another way).

The AI will be very smart, but that only helps so much; you will also need experiments and experience.

Link to comment
Share on other sites

What you are talking about is the discovery of the two main theories of the last century, relativity and quantum mechanics, but that is not the same as progress.

Theories:

https://en.wikipedia.org/wiki/Timeline_of_fundamental_physics_discoveries

Progress:

http://jgiudice.tripod.com/history/history-timeline.htm

You can see how progress in the second half of the century was much bigger than from 1900 to 1960.

Just mentioning the Internet: it changed everything.

You have two effects, improvements and cost increases; at some point the costs increase faster than the improvements and progress in the field slows to a crawl. New fundamental discoveries might revive the field again.

I am having a kind of deja vu with this, I guess we already had a similar discussion in the past :)

You don't need fundamental discoveries every year; we are still harvesting the benefits of relativity and quantum mechanics, and we are not even close to using the full potential of those two theories yet.

The AI will be very smart, but that only helps so much; you will also need experiments and experience.

I don't understand: are you saying we would need experiments and experience to create an AI, or that the AI will need experiments and experience to progress?

Edited by AngelLestat
Link to comment
Share on other sites

Angel, frankly, you can argue either direction. The speed of human transit jumped enormously from 1900 to 1960 - that's from horses and trains to the early jet age. Due to nasty little laws of physics, it's inconvenient to go faster than sound as it consumes too much fuel and makes too much noise and costs too much to construct an aircraft capable of doing it.

On the other hand, yeah, 1960 computers were experimental toys and few existed; engineers used slide rules. Not so now. And computers provide a way to bypass jet speed limits. We aren't quite there yet, but telepresence signals travel at lightspeed. If you could do every task remotely just as well as locally, it wouldn't matter that you'd have to wait hours to fly there.

Anyways, arguing aside, the common factor during the last few thousand years of progress is that human brains are, at best, very slightly better than the average human's brain during the time of the Romans. If we can eliminate human brains as the bottleneck, well, we ain't seen nothing yet. It is physically possible to tear apart planets into self replicating machinery, at least the solid portions, and probably possible to build antimatter fueled starships.

Link to comment
Share on other sites

I'm sorry for referring to the first post after 5 pages but I found it hard to keep track of the discussion (language barrier).

@AngelLestat

First something about syntax:

NeuNet - The common abbreviations for neural networks are "NN" or "ANN" (artificial neural network).

IA - I read this several times in this thread. What is meant by that? Or is it just a typo for "AI"?

To see whether these neural networks really show traits of intelligence in their results, and what is needed to achieve consciousness, we first need a better definition of what we call intelligence and consciousness.

That's so true! Currently nobody can give us a proper definition of what intelligence and consciousness is. All the "intelligent"* programs and machines are IMO not intelligent. They are a set of algorithms arranged in a clever way, nothing more.

* According to my vague understanding of "intelligence".

Taking a look at DeepMind's papers and the latest breakthroughs in neuroscience on the different brain mechanisms, we are close to creating an algorithm that learns in a similar way to the brain; it does not need to be identical, it just needs to work.

We already created that: ANNs. They are an approximation of the underlying mechanisms in a brain.

What we still can't do is simulate several millions of neurons and their interactions (we lack the hardware). We also lack the knowledge of how to link the neurons in a sensible manner. In our brain, neurons form groups which more or less interact with other groups. Several of these groups form a brain region which specializes in different tasks, e.g. there's a region for face recognition, one for short-term memory, one for long-term memory, etc. There are regions for specific tasks but there are also regions for "general processing" which don't specialize in specific tasks. All the regions work in very different ways and we still don't know how that works.

If you want to know more about how our brain works, have a look at this YouTube channel: https://www.youtube.com/user/TEDtalksDirector/search?query=brain

So, when will this [the singularity] happen?

They asked this same question to many scientists and people working in the field in 2012; the average answer was around 2040, but many of those specialists were not even able to predict the degree of success that deep learning has achieved in just a period of three years.

Elon Musk said it may happen in 5 to 10 years. If I have to make a prediction, I would say 9 to 14 years, and even 14 years looks like an eternity at this accelerating rate.

Although it's impossible to predict it, I would guess 100+ years.

When studying computer science I also took some classes in AI, and what I found there in terms of progress was disappointing. In the last 30 years there was no huge progress. All the "intelligent" stuff our computers do today already existed back then; they just didn't have the hardware. IMO the only field where I can see a bit of progress is semantic webs. A semantic web is a kind of knowledge database which can describe the world almost like we can, even in a vague way.

It arranges information in concepts and relations between the concepts. For example:

There's the concept of an elephant. It has relations to the concept of the color grey, to the concept of having ears, to the concept of being a mammal and several more. If you compare a real-life object to the concept of an elephant the object has to match all the relations or it won't be identified as an elephant.

There are three problems with semantic webs:

1. You need to build a comprehensive database. Google is already working on it. Their bots scan the web to identify information posted on websites and feed it into a database.

2. You need a strategy to deal with incomplete, vague, and contradicting information. What if an elephant is white (= is an albino)? What if the elephant is missing an ear due to a predator attack? Is this thing still an elephant? And what does "grey" mean? If we define grey as all RGB values of a color being equal, e.g. (128,128,128), does that mean that (1,1,1) is grey too? And what about (127,128,128)?

3. You need an ontology to describe information and a solving engine to compare the database with a given set of data (i.e. the real-life elephant). OWL Full is the best and most complete there is. The problem is that reasoning in OWL Full is undecidable, so computers can't fully evaluate it.

If you want to see a semantic web and an ontology in action, visit Google, search for Albert Einstein and look on the right-hand side. All the information about him in that box comes from that.
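As a rough sketch of how such concept/relation matching might look in code (the concept and relation names are invented for illustration; this is not how Google's system works): a concept can be modelled as a set of required relations, and matching as a set comparison.

```java
import java.util.*;

public class SemanticSketch {
    public static void main(String[] args) {
        // the concept "elephant" as the set of relations it must satisfy
        Set<String> elephant = new HashSet<>(Arrays.asList("isMammal", "hasEars", "isGrey"));

        // a real-life observation: an albino elephant (white, not grey)
        Set<String> observed = new HashSet<>(Arrays.asList("isMammal", "hasEars", "isWhite"));

        // strict matching fails on vague or contradicting data (problem 2 above)
        boolean strictMatch = observed.containsAll(elephant);
        System.out.println(strictMatch); // prints "false"

        // a tolerant matcher could instead count how many relations agree
        Set<String> overlap = new HashSet<>(observed);
        overlap.retainAll(elephant);
        System.out.println(overlap.size()); // prints "2"
    }
}
```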

My conclusion is that there's still much to be done before we can build a true AI. There are all kinds of smart algorithms but nobody combined them in a single program. We also lack the hardware to run the AI. Also it's questionable if such an AI can develop a consciousness from that. And we only have a vague idea what intelligence and consciousness is.

If there will be a singularity, humans will be the cause, because AIs are too dumb and will stay like that for quite a while.

Edited by *Aqua*
Link to comment
Share on other sites

Angel, frankly, you can argue either direction. The speed of human transit jumped enormously from 1900 to 1960 - that's from horses and trains to the early jet age. Due to nasty little laws of physics, it's inconvenient to go faster than sound as it consumes too much fuel and makes too much noise and costs too much to construct an aircraft capable of doing it.

Find a source that says human progress was greater before than it is now.

Yeah, you had jets... so? How many new things do we get every month in recent years?

What about genetics, computers for everyone, increases in communications and bandwidth, nanotechnology (with lots of subfields), etc.?

The jet is only one; if you start counting one by one, even if some are not as significant as the jet, you will still have 10 to 1.

Anyways, arguing aside, the common factor during the last few thousand years of progress is that human brains are, at best, very slightly better than the average human's brain during the time of the Romans. If we can eliminate human brains as the bottleneck, well, we ain't seen nothing yet. It is physically possible to tear apart planets into self replicating machinery, at least the solid portions, and probably possible to build antimatter fueled starships.

Source? From my understanding of evolution and the many sources I have read, the human brain has changed only slightly in the last 50,000 years.

Maybe you are confused because today's children seem more intelligent, but that is an illusion.

Children now receive more stimulation and information at all times, which helps them learn faster.

New technology is sometimes harder for adults, but children get used to it super fast; that is because children are born with many more neurons and connections than adults, so their brains are in full learning mode.

We, on the other hand, need to "delete" the things we know and learn the new ways.

Take a look at all those feral children; they don't seem more intelligent than a dog.

First something about syntax:

NeuNet - The common abbreviations for neural networks are "NN" or "ANN" (artificial neural network).

IA - I read this several times in this thread. What is meant by that? Or is it just a typo for "AI"?

I appreciate it. I corrected the AI issue, but regarding the correct abbreviation for ANN: I personally have always hated abbreviations because they make English a lot harder for me, and my post is so long that people would need to search the whole post for the definition.

That's so true! Currently nobody can give us a proper definition of what intelligence and consciousness is. All the "intelligent"* programs and machines are IMO not intelligent. They are a set of algorithms arranged in a clever way, nothing more.

* According to my vague understanding of "intelligence".

Did you watch the Michio Kaku video explanation in the link?

He does a good job; it makes a lot more sense than those IQ tests that only depend on practice.

Intelligence is almost any case where input is processed to produce an outcome.

Let's say you have a primitive fish eye that can only detect the absence of light (a shadow, for example); it triggers a signal which activates a muscle to dodge a possible attack or to find warmer waters.

That is intelligence, but there are many different levels of it: some creatures make decisions based on their position and their prey; higher levels make predictions about the future, or calculate many different possibilities or solutions to the same problem.

Consciousness may be harder to define; for example, "the knowledge that an individual has of his own existence, his statements, and his actions".

We already created that: ANNs. They are a approximation of the underlying mechanism in a brain.

I know, but we don't yet understand the role the different brainwaves (alpha, beta, delta, theta) play in the neural network.

Some may only relate to chemical processes inherent to the neuron, which may or may not be necessary for an ANN.

There are already ANNs that can be run in reverse mode.

What we still can't do is to simulate several millions of neurons and their interactions (we lack the hardware). We also lack the knowledge how to link the neurons in a sensible manner. In our brain neurons form groups which more and less interact with other groups. Several of these groups form a brain region which specializes in different tasks, i. e. there's a region for face recognition, one for short time memory, one for long time memory, etc. There are regions for specific tasks but there are also regions for "general processing" which don't specialize on specific tasks. All the regions work in a very different manner and we still don't know how that works.
One thing is to simulate; another is to find a way to do the same job. Electronics does not need to be equal to chemical neurons, so when you "simulate" you are forcing electronics to behave in a way that is not the best for it.

You can train a very massive ANN with an incredible amount of data, compared to the brain, in a short time.

Even today, computers learn to recognize real-life objects faster than kids do, without human intervention.

In my previous post I also explained that neurons cannot make long connections to just any other neuron in the brain; that's why some intermediate neurons may not be adding anything. But an ANN can interconnect any virtual neuron with any other.

If you want to know more about how our brain works, have a look at this YouTube channel: https://www.youtube.com/user/TEDtalksDirector/search?query=brain

I will check it.

Although it's impossible to predict it, I would guess 100+ years.

But remember that all the AI experts and people working in the field gave an average of 25 years from now.

I was checking the news about quantum computers today; China, Microsoft, and Google are all pushing hard to accomplish that.

Just a 45-qubit quantum computer would have the power of the fastest computer on the planet today. A 100-qubit computer would be more than trillions of times faster.

Of course, the more qubits you add, the more extra qubits you need to deal with errors. But the potential is incredible. Microsoft is already designing new encryption techniques to avoid future hacking damage from quantum computers.
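The numbers come from the state space of a quantum register growing as 2^n. This is only a count of basis states, an illustration of the scale, not a guarantee of speedup on every task:

```java
import java.math.BigInteger;

public class QubitScale {
    public static void main(String[] args) {
        // an n-qubit register spans 2^n basis states
        BigInteger states45 = BigInteger.ONE.shiftLeft(45);
        BigInteger states100 = BigInteger.ONE.shiftLeft(100);

        System.out.println(states45);  // 35184372088832 (~3.5e13)
        System.out.println(states100); // 1267650600228229401496703205376 (~1.3e30)

        // the gap between them is 2^55, tens of quadrillions
        System.out.println(states100.divide(states45)); // 36028797018963968
    }
}
```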

When studying computer sciences I also took some classes in AI and what I found there in terms of progression was disappointing. In the last 30 years there was no huge progress. All the "intelligent" stuff our computers do today already existed back then. They just didn't have the hardware. IMO the only field where I can see a bit of progression are semantic webs. A semantic web is a kind of knowledge database which can describe the world almost like we can do, even in a vague way.
But AI based on old binary software will never achieve true AI. ANNs, as we have seen, change that completely.

It changes everything.

It arranges information in concepts and relations between the concepts. For example:

There's the concept of an elephant. It has relations to the concept of the color grey, to the concept of having ears, to the concept of being a mammal and several more. If you compare a real-life object to the concept of an elephant the object has to match all the relations or it won't be identified as an elephant.

That is something an ANN can do.

There are three problems with semantic webs:

1. You need to build a comprehensive database. Google is already working on it. Their bots scan the web to identify information posted on websites and feed it into a database.

2. You need a strategy to deal with incomplete, vague, and contradicting information. What if an elephant is white (= is an albino)? What if the elephant is missing an ear due to a predator attack? Is this thing still an elephant? And what does "grey" mean? If we define grey as all RGB values of a color being equal, e.g. (128,128,128), does that mean that (1,1,1) is grey too? And what about (127,128,128)?

3. You need an ontology to describe information and a solving engine to compare the database with a given set of data (i.e. the real-life elephant). OWL Full is the best and most complete there is. The problem is that reasoning in OWL Full is undecidable, so computers can't fully evaluate it.

Here you need a very big ANN that includes words, images, and sounds.

And yes, it is already solved by ANNs; see the "spoiler" sections of the OP for examples.

You will be surprised.

If you want to see a semantic web and an ontology in action, visit Google, search for Albert Einstein and look on the right-hand side. All the information about him in that box comes from that.

Those things come from special data servers they have, but I know about all of Google's efforts to improve its crawler engines and search results.

I am a web programmer with deep knowledge of SEO.

My conclusion is that there's still much to be done before we can build a true AI. There are all kinds of smart algorithms but nobody combined them in a single program. We also lack the hardware to run the AI. Also it's questionable if such an AI can develop a consciousness from that. And we only have a vague idea what intelligence and consciousness is.

If there will be a singularity, humans will be the cause, because AIs are too dumb and will stay like that for quite a while.

They just need to crack consciousness; it may be a very simple algorithm or a complex one. But once they do, they will already have the computing power to create a hard AI.

Link to comment
Share on other sites

In my previous post I also explained that neurons cannot make long connections to just any other neuron in the brain; that's why some intermediate neurons may not be adding anything. But an ANN can interconnect any virtual neuron with any other.

Actually, that's not quite true. Association fibres connect distant parts of the brain to one another.

But remember that all the AI experts and people working in the field gave an average of 25 years from now.

I was checking the news about quantum computers today; China, Microsoft, and Google are all pushing hard to accomplish that.

Just a 45-qubit quantum computer would have the power of the fastest computer on the planet today. A 100-qubit computer would be more than trillions of times faster.

Of course, the more qubits you add, the more extra qubits you need to deal with errors. But the potential is incredible. Microsoft is already designing new encryption techniques to avoid future hacking damage from quantum computers.

Genuine question - do we actually know this? I've read that there are some tasks (factoring very large numbers for example) that are only feasible on a quantum computer (in this case, using Shor's algorithm), but can we say that this will apply for all tasks? I'm wondering if quantum computers might be analogous to parallel computers here. Some computational tasks are trivially parallel and relatively easy to implement on parallel processors. Others are not and in the end the speed gains you see from the parallel processing are negated by the additional programming complexity required to run them in parallel. Hope that makes sense.

They just need to crack consciousness; it may be a very simple algorithm or a complex one. But once they do, they will already have the computing power to create a hard AI.

That assumes that we can reduce consciousness to an algorithm. What if we can't? What if it's an emergent property of a particular system of NNs (or ANNs) and we have no way of figuring it out in advance?

Link to comment
Share on other sites

I appreciate it. I corrected the AI issue, but regarding the correct abbreviation for ANN: I personally have always hated abbreviations because they make English a lot harder for me, and my post is so long that people would need to search the whole post for the definition. ;)
Did you watch the Michio Kaku video explanation in the link?

Nope. He seems to be very famous in the USA. I'm always cautious of the opinions of VIPs because you never know if they present their (professional) opinions or if the TV crew told them what to say or if their opinions are taken out of context.

I'll watch it in a minute.

there are many different levels of intelligence: some creatures make decisions based on their position and their prey; higher levels make predictions about the future, or calculate many different possibilities or solutions to the same problem.
I don't agree with your definition of intelligence. What you said would make an automatic light with a motion sensor intelligent (which IMO isn't intelligent).

I won't define intelligence because I don't think I'm smart enough for that. But the definition should include at least:

- ability to make decisions (= plan a course of actions)

- evaluation of surroundings and self (= ability to create an abstract representation of the world)

- consideration of past memories (= learning)

- consideration of possible futures (= prediction)

Consciousness may be harder to define; for example, "the knowledge that an individual has of his own existence, his statements, and his actions".
Sounds like the definition of "memory". ;)

I won't try to define consciousness. I've no idea where to start. Maybe the ability to distinguish between self and surroundings? Having an opinion about things and actions?

I know, but we don't yet understand the role the different brainwaves (alpha, beta, delta, theta) play in the neural network.
Afaik brainwaves are just the noise of neural activity.

An analogue would be a room full of loudly talking people. You wouldn't be able to distinguish the talk of a single person, but you can hear the noise they all make together.

There are already ANNs that can be run in reverse mode.
This was invented 30+ years ago. There are a lot of different ANNs and some of them allow that. There are also some which can act like memory, some have the ability to modify themselves, etc.

ANNs are very good at pattern recognition, which is why they are used in automated image processing (face recognition, OCR [optical character recognition = reading text in an image], etc.). But they are bad at other things, for example decision making: I don't see a way for one to make a plan of actions.

Electronics does not need to be equal to chemical neurons, so when you "simulate" you are forcing electronics to behave in a way that is not the best for it.
Optimizations can be done later. Currently it's more important to find a mathematical description of how the brain works. When we have that, we can make an artificial brain with our technology. Remember that ANNs are only an approximation of biological NNs. They don't work exactly the same as the original. And maybe that difference is the key to real intelligence.
You can train a very massive ANN with an incredible amount of data, compared to the brain, in a short time.

Even today, computers learn to recognize real-life objects faster than kids do, without human intervention.

Yes and no.

Yes, computers can calculate a lot in a short amount of time but ANNs actually learn very slowly compared to our brain.

For example this is a very simple perceptron (a kind of ANN) I wrote a few years ago:



import java.io.Serializable;

// simple ANN (perceptron) with a 2-dimensional layer of input neurons
// and a 1-dimensional layer of output neurons
public final class ANN implements Serializable {

    private static final long serialVersionUID = -1832083414154078932L;

    // each output neuron gets its own plane of weights to the input layer;
    // sharing a single plane between all outputs would make their updates conflict
    private final float[][][] weight;
    private final float[] threshold; // threshold of each output neuron
    private final boolean[] output;  // output neurons, results are stored here
    private final float learningRate = 0.2f; // how fast it should learn

    // constructor
    // x*y -> number of input neurons
    // z   -> number of output neurons
    public ANN(final int x, final int y, final int z) {

        // randomize weights
        weight = new float[z][x][y];
        for (int k = 0; k < z; k++) {
            for (int i = 0; i < x; i++) {
                for (int j = 0; j < y; j++) {
                    weight[k][i][j] = (float) Math.random();
                }
            }
        }

        // randomize thresholds
        threshold = new float[z];
        output = new boolean[z];
        for (int k = 0; k < z; k++) {
            threshold[k] = (float) Math.random();
        }
    }

    // lets the ANN run once
    public boolean[] fire(final boolean[][] input) {

        // calculate for each output neuron whether it should activate
        for (int k = 0; k < output.length; k++) {
            float net = 0f; // net activity

            // sum the weights of all active input neurons
            for (int i = 0; i < input.length; i++) {          // input.length = array width
                for (int j = 0; j < input[0].length; j++) {   // input[0].length = array height
                    if (input[i][j]) {
                        net += weight[k][i][j]; // update net activity
                    }
                }
            }

            // activation of the current output neuron, simple threshold comparison
            output[k] = net > threshold[k];
        }

        return output;
    }

    // use this to let the net learn
    // provide a set of input data and the desired output
    public void learn(final boolean[][] input, final boolean[] desiredOutput) {

        // the ANN needs to process the data once
        fire(input);

        for (int k = 0; k < output.length; k++) {
            // apply the delta rule to check for errors
            final float delta = (desiredOutput[k] ? 1f : 0f) - (output[k] ? 1f : 0f);

            if (delta != 0f) { // if there's an error the ANN needs to learn
                // update the weights of this output neuron
                for (int i = 0; i < input.length; i++) {
                    for (int j = 0; j < input[0].length; j++) {
                        weight[k][i][j] += learningRate * (input[i][j] ? 1f : 0f) * delta;
                    }
                }

                // update threshold
                threshold[k] -= learningRate * delta;
            }
        }
    }
}

This perceptron can be used for OCR. It needs 200 to 1000 (!) learning iterations until it can distinguish between all the letters of the alphabet.

A human child needs one iteration. He/she immediately sees the differences between all the letters, even if he/she can't read and has never seen letters before.

Just a 45-qubit quantum computer would have the power of the fastest computer on the planet today. A 100-qubit computer would be more than trillions of times faster.
Fast at which tasks, compared to what?

Quantum computers are only fast at very specific tasks. A simple 1+1 calculation is likely to be slower than on an electronic computer. I'm not sure a quantum computer can actually perform basic calculations.

There was a course at my university about quantum computing. I didn't attend it. The related math is crazy!

But AI based on old binary software will never achieve true AI.
Why?

As long as you can make a math formula describing something it can be calculated by computers.

[related to the elephant example]

That is something an ANN can do.

Here you need a very big ANN that includes words, images, and sounds.

And yes, it is already solved by ANNs; see the "spoiler" sections of the OP for examples.

You will be surprised.

Nope. It can't do that.

All the examples in the spoiler may use an ANN for pattern recognition, but all the stuff (interpretations, evaluations, etc.) that comes after is done by other algorithms.

They just need to crack consciousness; it may be a very simple algorithm or a complex one. But once they do, they will already have the computing power to create a hard AI.
"Just" is a nice word for describing a monumental task. ;)

Let's say scientists somehow understand how consciousness works. A conscious AI will still be dumb.

Does an AI even need a consciousness? Do we want an AI with a consciousness?

I say no to both questions. I would be happy if a slave (there's no other use for an AI) doesn't have an opinion about being a slave.

It's okay. As long as we know you call them NeuNets we'll be fine.
Edited by *Aqua*
Link to comment
Share on other sites

Actually, that's not quite true. Association fibres connect distant parts of the brain to one another.

I searched for this and didn't find anything. If you have a link showing how these connections can form and appear in response to a need that was not planned by genetics, then I will agree with you.

What is the process that might allow such a long connection? For that, we would need to assume that the connection itself is intelligent.

Genuine question - do we actually know this? I've read that there are some tasks (factoring very large numbers for example) that are only feasible on a quantum computer (in this case, using Shor's algorithm), but can we say that this will apply for all tasks? I'm wondering if quantum computers might be analogous to parallel computers here. Some computational tasks are trivially parallel and relatively easy to implement on parallel processors. Others are not and in the end the speed gains you see from the parallel processing are negated by the additional programming complexity required to run them in parallel. Hope that makes sense.

Yeah, it depends on the number of quantum algorithms you develop as replacements for the most useful binary algorithms.

We have coded a few, like search algorithms, factoring, etc.

For example, quantum algorithms are very good for simulations, such as chemical simulations and many other fields.

They are also perfect for ANNs, which is the most important thing here.

Also, a structure based on quantum mechanics to define the ANN would not need new algorithms (because it does not compute), and it would be much faster and easier to make than any ANN running in a quantum computer.
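On KSK's earlier point that some tasks parallelize well and others don't: that limit is captured by Amdahl's law, where the serial fraction of a task caps the total speedup no matter how many processors you add. A quick illustrative sketch (added for illustration, not part of the original posts):

```python
def amdahl_speedup(p, n):
    """Overall speedup from Amdahl's law: p is the parallelizable
    fraction of the work, n is the number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

# A task that is 90% parallelizable can never run more than 10x
# faster overall, no matter how many processors you throw at it.
print(amdahl_speedup(0.90, 10))        # ~5.3x with 10 processors
print(amdahl_speedup(0.90, 1_000_000)) # still under 10x with a million
```

The same caveat plausibly applies to quantum speedups: they help only where a suitable quantum algorithm covers the dominant part of the work.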

That assumes that we can reduce consciousness to an algorithm. What if we can't? What if it's an emergent property of a particular system of NNs (or ANNs) and we have no way of figuring it out in advance?

Yeah, that is one of the big risks, but it does not change whether the future turns out that way or not. ANNs can greatly accelerate progress, so eventually a true consciousness will be created (there is no escaping that destiny). You might think it would be easy to control it if it happens this way, but no.

That is why I think some of the AI specialists interviewed appear calm and are not alarmed about the implications of AI and the singularity.

Because they know these facts:

1 - There is no need to scare people, because even if they don't work in this field, someone else will; trying to stop this would be like trying to stop human progress: it is impossible.

2 - They don't want to work with crowds of people holding signs outside their offices, or, in the worst-case scenario, this situation; who knows how many Sarah Connors are out there.

3 - They are getting old; if there is a chance to defeat death, they will take it. The risk does not matter, because it will happen anyway; even with global awareness and all safety measures taken, hard AI development could not be delayed more than 3 years at that point.

The people who think they can experiment with a hard AI structure (a superintelligence in development) and keep it under control are delusional; it is an insult to our own intelligence.

I don't agree with your definition of intelligence. What you said would make an automatic light with a motion sensor intelligent (which IMO isn't intelligent).

I won't define intelligence because I don't think I'm smart enough for that. But the definition should include at least:

- ability to make decisions (= plan a course of actions)

- evaluation of surroundings and self (= ability to create an abstract representation of the world)

- consideration of past memories (= learning)

- consideration of possible futures (= prediction)

You have a very big problem with that: where do you cross the line? What do you call the lower levels?

Maybe a dog fulfills all those requirements, but it only makes very primitive predictions that are impossible to detect.

You need to treat intelligence as a unit of measurement, the same as "heat".

The word "cold" doesn't have any meaning in physics, only heat. That heat may be 4 kelvin or 1 million kelvin.

Our intelligence is no different; it is just growth in complexity: the number of variables, sensory inputs, and pieces of knowledge used to produce one or many outcomes. It is very hard to draw a line here.

Sounds like the definition of "memory". ;)

I won't try to define consciousness. I've no idea where to start. Maybe the ability to distinguish between self and surroundings? Having an opinion about things and actions?

Yeah, I agree that we still don't have a definition of consciousness, so it would be very hard to detect if we achieve it some day.
This was invented 30+ years ago. There are a lot of different ANNs and some of them allow that. There are also some which can act like memory, some have the ability to modify themselves, etc.

Yeah, that is true.

ANNs are very good at pattern recognition; that's why they are used in automated image processing (face recognition, OCR [optical character recognition = reading text on an image], etc.). But they are bad at other things, for example decision making: I don't see how one could make a plan of actions.

They are good at relating information, the same as our own neurons. Decision making is the same: an image-search ANN makes a decision to produce an outcome; it calculates the answer with the best chance of being right.

That can be applied to anything; some examples may just increase the complexity of the ANN.
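That "answer with the best chance of being right" is, concretely, just picking the output unit with the highest probability. A minimal illustrative sketch (the labels and scores here are made up, not from any real network):

```python
import math

def softmax(scores):
    """Turn raw network outputs into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(scores, labels):
    """An ANN 'decision': pick the label with the highest probability."""
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Hypothetical final-layer outputs of an image classifier
labels = ["cat", "dog", "elephant"]
label, confidence = decide([1.2, 0.3, 3.1], labels)
print(label)  # elephant
```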

If you are talking about conscious decisions, that is a different case. It has already been proven that most (if not all) of our decisions are made by the unconscious, and then our consciousness creates the illusion of choice. Here Sam Harris explains it.

Optimizations can be done later. Currently it's more important to find a mathematical description of how the brain works. When we have that we can make an artificial brain with our technology. Remember that ANNs are only an approximation of biological NNs. They don't work exactly the same as the original. And maybe that difference is the key to real intelligence.

Agreed, but once you know how it works (or maybe even before), you cannot say there is a limit that electronics or quantum computers cannot overcome in their own way.

And you don't need to simulate the brain to do that; we have a lot of clues about how the brain works, and those mechanisms can be translated to electronics and tested. Now that physicists are joining the brain problem, they will find a way to break all the mechanisms down to a few simple laws.

Yes and no.

Yes, computers can calculate a lot in a short amount of time but ANNs actually learn very slowly compared to our brain.

For example this is a very simple perceptron (a kind of ANN) I wrote a few years ago:

"Code"

This perceptron can be used for OCR. It needs 200 to 1000 (!) learning iterations until it can distinguish between all letters in the alphabet.

A human child needs 1 iteration. He/she immediately sees the differences between all the letters, even if he/she can't read and has never seen letters before.

I didn't take a deep look at the code, but there are lots of new techniques to improve the efficiency of ANNs; still, I'm glad that you experimented with that.

But your conclusion is wrong; a human needs more than one iteration to learn the differences. You are measuring it wrong.

First, I really doubt that a child can see the differences between the letters on a first iteration. In fact, there is no such thing as a first iteration: your brain already has a visual pattern structure of edges and other concepts that helps you categorize them.

Second, your glance does not last 1 millisecond, so you are comparing all the letters against each other continuously.

Also, we need to encounter a concept many times before we develop strong synapses.
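For readers wondering what a perceptron like Aqua's looks like, here is a minimal illustrative sketch (not the original code, which was omitted above); even this toy two-letter case needs repeated passes over the training data:

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Train a single-layer perceptron on (pixel_vector, label) pairs
    using the classic perceptron learning rule."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Step activation: fires if the weighted sum exceeds 0
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Nudge weights toward the target on every mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy "OCR": tell a 3x3 'I' glyph from a 3x3 'O' glyph, each
# flattened to 9 pixels. Real letter recognition needs far more
# samples and iterations, which was Aqua's point.
I_GLYPH = [0, 1, 0, 0, 1, 0, 0, 1, 0]
O_GLYPH = [1, 1, 1, 1, 0, 1, 1, 1, 1]
w, b = train_perceptron([(I_GLYPH, 1), (O_GLYPH, 0)])

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(classify(I_GLYPH), classify(O_GLYPH))  # 1 0
```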

Fast at what kind of work, compared to what?

Quantum computers are only fast at very specific tasks. A simple 1+1 calculation is likely to be slower than on an electronic computer. I'm not sure if a quantum computer can actually perform basic calculations.

There was a course at my university about quantum computing. I didn't attend it. The related math is crazy!

I answered this in my reply to KSK above.

Why?

As long as you can make a math formula describing something, it can be calculated by computers.

I'm not sure if we are talking about the same thing; I thought you were referring to the old AI software structure.

Like: if someone asks for X, answer Y.

It's impossible to achieve complex intelligence with that, because the number of rules you need increases exponentially.

This is similar to all the earlier work in image recognition, where they had hundreds of engineers writing new rules to accomplish almost nothing.
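The exponential blow-up is easy to quantify: a hand-coded rule table that must answer for every combination of binary conditions needs one rule per combination. A quick illustration (added for clarity, not from the thread):

```python
def rules_needed(num_conditions):
    """A lookup-table 'AI' needs one hand-written rule for every
    combination of binary conditions it might encounter."""
    return 2 ** num_conditions

# Each extra condition doubles the rule count.
for c in (10, 20, 30):
    print(c, rules_needed(c))
# 10 conditions -> 1,024 rules; 30 conditions -> over a billion
```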

Nope. It can't do that.

All the examples in the spoiler may use an ANN for the pattern recognition, but all the stuff that comes after that (interpretations, evaluations, etc.) is done by other algorithms.

If you want to show the results on a normal computer, you will need normal algorithms for that. But I'm not sure what your point is with this?

If you have a robot with many sensors that only needs to take actions based on its environment, with a self-learning mechanism, then you can achieve all of that with just an ANN. Of course, some of the algorithms that simulate an ANN are normal algorithms if they run on a normal computer. That can change if they run on a full quantum ANN structure.

"Just" is a nice word for describing a monumental task. ;)

Let's say scientists somehow understand how consciousness works. A conscious AI will still be dumb.

There is no way to guarantee that... maybe our consciousness is limited by factors that an artificial consciousness will not be.

Does an AI even need a consciousness? Do we want an AI with a consciousness?

I say no to both questions. I would be happy if a slave (there's no other use for an AI) doesn't have an opinion about being a slave.

Someone will do it just because we can: as an experiment, out of need, or for many other purposes.
How to survive the singularity:

1) Convert to Hinduism

2) Die

3) Repeat until reincarnated as the first sentient doom machine

4) Gloat

Not sure if the soul will be compatible with that hardware :)

Link to comment
Share on other sites

I searched for this and didn't find anything. If you have a link showing how these connections can form in response to a need that was not planned by genetics, then I will agree with you.

What process might allow such a long connection? For that, we would need to assume that the connection itself is intelligent.

Here you go:

https://en.wikipedia.org/wiki/Association_fiber

http://teachinganatomy.blogspot.co.uk/2012/12/WhiteMatter-Cerebrum.html

https://books.google.co.uk/books?id=QIZ668QEH0wC&pg=PA82&lpg=PA82&dq=association+fiber&source=bl&ots=Ee6iKXOaNR&sig=PwQXKVLiyyaKZ6C7UJyig7fzfuQ&hl=en&sa=X&ved=0CEAQ6AEwBTgKahUKEwjMztmBrMzHAhUCkw0KHfAeALs#v=onepage&q=association%20fiber&f=false

Note that the fibres are bundles of axons, so I would interpret them as being bundles of very long cells rather than chains of shorter cells.

Yeah, it depends on the number of quantum algorithms you develop as replacements for the most useful binary algorithms.

We have coded a few, like search algorithms, factoring, etc.

For example, quantum algorithms are very good for simulations, such as chemical simulations and many other fields.

They are also perfect for ANNs, which is the most important thing here.

Could you provide a link for this? I'm not necessarily disagreeing with you but I can't think why quantum algorithms would be so particularly good for ANNs.

*snip* The people who think they can experiment with a hard AI structure (a superintelligence in development) and keep it under control are delusional; it is an insult to our own intelligence.

Actually, this is one of the biggest disagreements I have with the notion of a Singularity. It's never going to depend solely on a notional machine super-intelligence - it's always going to have a human component to it. Unless we put those superintelligences in charge of the necessary machinery to actually make anything then they're not going to be able to do much. To use another T2 analogy, unless we're actually stupid enough to put an AI in charge of strategic nuclear weapons, we have no need to fear Judgment Day.

It doesn't matter how many great ideas or revolutionary concepts the AI comes up with, unless it can persuade enough humans to go along with it, those ideas won't ever see the light of day. And goodness knows, humans are bad enough at listening to their own scientists - who says they're going to be any better at listening to a machine intelligence with motives and motivations that they don't (and maybe can't) understand.

Edited by KSK
Link to comment
Share on other sites

Another way it might turn out is how nuclear power turned out, from the 1950s predictions to today. Yes, this is partly political; however, the political part is purely a Western issue, not one for the USSR/Russia or China.

Also Intel's CPU roadmap from around 2005, where they assumed 10+ GHz CPUs.

The Fallout game series is based on 1950s predictions of the future, steampunk on 1880s predictions.

In short, we do not know where the graph starts to flatten out. Modern supercomputers are factory-sized like the old mainframes, except that they fill a building with standard high-grade GPUs and CPUs.

Or any physical system. No matter how much "software" there is, it still needs to run on silicon, or optical fibre, or quantum "chips", or whatever. All computers, no matter how advanced, run on... matter. So they face the same limits that our human brains, and nature, already deal with, and for which nature has already found the "most efficient" solutions.

So we either gain computation at the risk of power use and hard lock-ups, or gain flexibility at the expense of lower computational power. There are no true "perfect" solutions.

Link to comment
Share on other sites

I won't define intelligence because I don't think I'm smart enough for that. But the definition should include at least:

- ability to make decisions (= plan a course of actions)

- evaluation of surroundings and self (= ability to create an abstract representation of the world)

- consideration of past memories (= learning)

- consideration of possible futures (= prediction)

The Demis Hassabis video that AngelLestat linked to earlier in this thread already shows his computers doing most, if not all, of those things.

This was invented 30+ years ago. There are a lot of different ANNs and some of them allow that. There are also some which can act like memory, some have the ability to modify themselves, etc.

Just because an idea was conceived decades before the technology existed to implement it doesn't mean it is a bad idea... Tsiolkovsky and Goddard's foundational work in rocketry was also done decades before Sputnik was launched.

This perceptron can be used for OCR. It needs 200 to 1000 (!) learning iterations until it can distinguish between all letters in the alphabet.

A human child needs 1 iteration. He/she immediately sees the differences between all the letters, even if he/she can't read and has never seen letters before.

I agree with AngelLestat on this one. I think you seriously underestimate the number of iterations it takes a human child to spot the differences between letters. Why would childish handwriting with backwards B's and S's be such a cliché if this weren't true? Also, I challenge you to immediately spot the differences in another language's script. How many iterations would it take you to identify the 47 different letters of the Devanagari script?

(Image: Chandas typeface specimen of the Devanagari script)

Link to comment
Share on other sites
