Everything wrong with our predictions (The Singularity is coming)


AngelLestat

Recommended Posts

Just because an idea was conceived decades before the technology existed to implement it doesn't mean it is a bad idea... Tsiolkovsky and Goddard's foundational work in rocketry was also done decades before Sputnik was launched.

No it doesn't, but I think Aqua was making the point to put things in perspective. Firstly, that we've known how to do things with ANNs for a while, so let's not get too excited that they're starting to become popular again. And secondly, that despite the advances in hardware, we haven't progressed quite as much with the underlying theory.

I agree with AngelLestat on this one. I think you seriously underestimate the number of iterations it takes a human child to spot the difference between letters. Why would childish handwriting with backwards B's and S's be such a cliché if this wasn't true? Also, I challenge you to immediately spot the differences in another language's script. How many iterations would it take you to identify the 47 different letters of the Devanagari script?

https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Chandas_typeface_specimen.svg/465px-Chandas_typeface_specimen.svg.png

Depends what you mean. Identifying that one letter is not another (in other words, the differences between them) wouldn't take long at all. Memorising the full set of shapes would take longer, learning the discrete names and sounds for each letter (which is arguably the point where they become useful) would take longer still.

Link to comment
Share on other sites

First I want to say I'm happy about the way the discussion goes. There's no bashing and no unnecessary Terminator references. :)

How to survive the singularity:

1) Convert to Hinduism

2) Die

3) Repeat until reincarnated as the first sentient doom machine

4) Gloat

If we apply theoretical computer science to that, the halting problem applies to step 3. Or in layman's terms: we don't know if we'll ever get past step 3. :huh:

For example, quantum algorithms are very good for simulations, such as chemical simulations, among many other fields.

They are also perfect for ANNs, which is the most important point here.

Could you give an example which quantum algorithms are perfect for ANNs?

I still doubt it is possible.

A biological NN uses analogue values, processed in a continuous way.

An ANN uses digital/binary values, processed in steps.

A quantum computer calculates with energy levels and/or spins or other physical properties of particles, processed instantly(?).

While you can simulate analogue values using binary values with a bit of rounding, energy levels and spins work in a completely different way, as far as I understand them.
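As a concrete illustration of that rounding (a minimal sketch; the values and bit width are invented for the example, not tied to any real neural or quantum hardware):

```python
# Rounding an analogue value onto a fixed number of binary steps
# (8-bit quantisation; purely illustrative).
def quantize(x, bits=8):
    levels = 2 ** bits - 1  # 255 discrete steps for 8 bits
    return round(x * levels) / levels

print(quantize(0.7231))  # ~0.7216, off by less than one step (1/255)
```

The more bits you spend, the closer the binary approximation gets to the continuous value, at the cost of more computation.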

Also, a structure based on quantum mechanics to define the ANN directly would not need new algorithms (because it does not compute), and it would be much faster and easier to make than any ANN running on a quantum computer.
Quantum particles don't work the same way as neurons. Or in better wording, I don't know of a mechanism which applies to quantum particles, works similarly to neurons, and can be used by us.
You have a very big problem with that: where do you cross the line? What do you call the lower levels?

Maybe a dog fulfills all those requirements, but it only makes very primitive predictions that are impossible to detect.

You need to take intelligence as a unit of measurement, the same as "heat".

The word "cold" doesn't have any meaning in physics, only heat. This heat may be 4 kelvin or a million kelvin.

Our intelligence is no different; it is just growth in complexity: the number of variables, sense inputs, and pieces of knowledge used to produce an outcome or many outcomes. It is very hard to draw a line here.

I'm not sure I understand you right. What line? What lower levels?

A dog does make plans. For example, mine usually tried to keep his pack together. When I was a child my dog always chased me and lightly bit me (not to the point of injury) when I tried to run away. He did it to stop me, and he immediately stopped harassing me when I stopped running away.

This example shows prediction ("he'll be gone" -> weakened pack & "he'll stay if I act" -> the pack stays the way it is), planning (running to me, stopping me, keeping me near the pack) and an abstract view of the world (more specifically: the state of his pack & me).

Wolves are known to hunt in packs. There are drivers which drive the prey into a trap: a place surrounded by other wolves of the pack waiting there. That proves the ability to plan ahead.

They are good at relating information, the same as our own neurons. Decision making is the same: an image-search ANN makes decisions to produce an outcome; it calculates the answer with the best chance of being right.

That can be applied to anything; some examples may just increase the complexity of the ANN.
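The "answer with the best chance of being right" idea can be sketched in a couple of lines (the class names and scores below are made up for illustration):

```python
# An ANN's output neurons produce activation scores; the "decision" is
# simply the answer with the highest score (names/values illustrative).
scores = {"cat": 0.12, "dog": 0.81, "bird": 0.07}
decision = max(scores, key=scores.get)
print(decision)  # dog
```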

What about problems an ANN wasn't trained for? It can't "think" outside of the box and be creative.

I'm under the impression that you didn't completely understand how an ANN works.

There's a set of input neurons. They are fed with sensory information (a picture, sound, etc.). Attached to the input neurons can be 0 or more layers of hidden neurons which are connected in sometimes weird ways. And then there are the output neurons which are either connected to the hidden neurons and/or the input neurons. The state (activated/deactivated) of these neurons represents the conclusion of the ANN to the given input. For example if output neuron #4872 fires and all others didn't, it means that the letter 'a' is recognized.

An artificial neuron works like this: It is connected to at least one other neuron which is either activated or not. The connections to these neurons have a weight value. Weight represents how 'important' the state of the connected neuron is. The artificial neuron now checks each connected neuron while taking the weight into consideration (there are a number of ways to do that) and comes up with a value which represents the so-called 'net activity'. If said net activity goes above/below/something else (depending on the algorithm) a threshold, the neuron activates (it 'fires').

This is done for each neuron.
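The firing rule described above can be sketched like this (a minimal toy version; the weight and threshold values are invented for the example):

```python
# One artificial neuron: the weighted sum of connected neurons' states
# is the 'net activity', compared against a threshold to decide firing.
def neuron_fires(states, weights, threshold):
    net_activity = sum(s * w for s, w in zip(states, weights))
    return net_activity > threshold

# Three connected neurons: the first two are active (1), the third is not (0).
print(neuron_fires([1, 1, 0], [0.5, 0.3, 0.9], threshold=0.6))  # True (0.8 > 0.6)
print(neuron_fires([1, 0, 0], [0.5, 0.3, 0.9], threshold=0.6))  # False (0.5 < 0.6)
```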

It can get quite complex and difficult to do that right. For example, if there are neurons which form a loop, do you want to repeat the loop a few times? Or just ignore it?

Btw, memory in ANNs is done using loops: processing them only once during each iteration and keeping their activity state between two iterations.
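That loop-based memory can be sketched as a neuron feeding its own previous state back into its input on the next iteration (the weights here are invented for the example):

```python
# A self-loop as one-bit memory: once activated by an input, the neuron
# keeps re-activating itself on later iterations even without input.
def step(prev_state, new_input, w_in=0.6, w_loop=0.6, threshold=0.5):
    net = new_input * w_in + prev_state * w_loop
    return 1 if net > threshold else 0

state = 0
state = step(state, 1)  # input arrives -> neuron fires
state = step(state, 0)  # input gone, but the loop keeps the state alive
print(state)  # 1 - the network 'remembers' the earlier input
```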

And you don't need to simulate the brain to do that; we have a lot of clues about how the brain works, mechanisms which can be translated to electronics and tested. Now that physicists are joining in on the brain problem, they will find a way to break all its mechanisms down into a few simple laws.
Clues are not enough. We need facts. Afaik the brain is still mostly not understood. For example, just a few years ago they found groups of only a couple of thousand neurons which have only a few connections to the outside of the group. And they still don't know why they are there and what they do.
But your conclusion is wrong; a human needs more than those iterations to learn the difference. You are measuring it wrong.
Yeah, I was a bit quick writing this.
Not sure if we are talking about the same things; I thought that you were referring to old AI software structures.

Like: if someone ask for X, answer Y.

You mean rule based systems? Ok, we are not talking of the same things. I guess I couldn't put my thought in the correct words. (And by now I have forgotten what I wanted to write back then. -_-)
But I'm not sure what your point is with this?

If you have a robot with many sensors that only needs to take actions based on the environment with a self-learning mechanism, then you can achieve all that with just an ANN. Of course, some of the algorithms to simulate an ANN are conventional if they run on a normal computer. That can change if they run in a full quantum ANN structure.

What I tried to say is that ANNs are not the pinnacle of AI programming. IMHO evolutionary programming is the way which leads to a real thinking AI. While ANNs are static (they have neurons, they have connections between the neurons - that will never change) evolutionary algorithms can change everything about themselves. There's nothing static anymore which means they can adapt to unknown situations and can come up with creative solutions to a problem. It's even possible that they come up with their own kind of ANN to solve specific problems.
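As a toy illustration of the evolutionary idea (mutation plus selection; the bit-string "genome" and fitness function are invented for the example and are far simpler than evolving a morphology):

```python
import random

# Toy evolutionary algorithm: evolve a bit string toward a target by
# keeping the fittest half of the population and refilling it with
# mutated copies (single random bit flips).
def evolve(target, population_size=20, generations=200, seed=42):
    rng = random.Random(seed)
    n = len(target)

    def fitness(individual):
        return sum(a == b for a, b in zip(individual, target))

    population = [[rng.randint(0, 1) for _ in range(n)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == n:
            break  # perfect match found
        survivors = population[:population_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(n)
            child[i] = 1 - child[i]  # mutate one random bit
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve([1, 0, 1, 1, 0, 0, 1, 0])
print(best)
```

Nothing in the loop is hand-designed for the problem itself: the same mutate-and-select cycle works whatever the genome encodes, which is what makes the approach so flexible.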

Here's an example of an evolutionary algorithm forming a morphology and at the same time developing techniques to move around and perform special tasks. An ANN would never be able to do that on its own.
[definition of "intelligence"]

The Demis Hassabis video that AngelLestat linked to earlier in this thread already shows his computers doing most, if not all, of those things.

I want to point out that my list contains stuff which IMO should be included in an "intelligence" definition. This list is by no means complete. An intelligent being does more than what I wrote, for example being creative.

Edited by *Aqua*

What I tried to say is that ANNs are not the pinnacle of AI programming. IMHO evolutionary programming is the way which leads to a real thinking AI. While ANNs are static (they have neurons, they have connections between the neurons - that will never change) evolutionary algorithms can change everything about themselves. There's nothing static anymore which means they can adapt to unknown situations and can come up with creative solutions to a problem. It's even possible that they come up with their own kind of ANN to solve specific problems.

I don't think that we were ever talking exclusively about ANNs, were we? We were talking about recent advances in AI in general. This would obviously include ANNs, but isn't limited to just that approach.

I want to point out that my list contains stuff which IMO should be included in an "intelligence" definition. This list is by no means complete. An intelligent being does more than what I wrote, for example being creative.

Again, I believe that we are discussing recent advances in AI and how these may lead to "the Singularity" [dunt dunt dunnn!]. I don't think anyone who's "in the know" would say that human level AI is imminent. True experts in the field like Geoffrey Hinton and Demis Hassabis are still saying that it is likely decades away.

Even so, it is pretty clear that there have been significant advances in the past few years that toss out some of our previously held ideas about how to make computers "smarter".


But all those, as I said, are predesigned by genetics. A neuron in our brain will not automatically make a connection to a distant neuron in another part of the brain; in case it needs to, it would make a connection path through the adjacent neurons until it reaches these bridges ("association fibers"), then cross other neurons until it reaches the one it needs.

So all those intermediate neurons are not adding calculations for that particular connection; that is something that can be simplified with electronics, where you can connect any perceptron to any other perceptron.

Could you provide a link for this? I'm not necessarily disagreeing with you but I can't think why quantum algorithms would be so particularly good for ANNs.

http://www.businessinsider.com.au/quantum-computers-will-change-the-world-2015-4

http://www.triniti.ru/CTF%26VM/Articles/Ezhov1.pdf

I gave a small explanation in the OP, section "Brain vs CPU", inside a spoiler tag.

Actually, this is one of the biggest disagreements I have with the notion of a Singularity. It's never going to depend solely on a notional machine super-intelligence - it's always going to have a human component to it. Unless we put those superintelligences in charge of the necessary machinery to actually make anything then they're not going to be able to do much. To use another T2 analogy, unless we're actually stupid enough to put an AI in charge of strategic nuclear weapons, we have no need to fear Judgment Day.

You know why, of all ANN models, deep learning is so powerful? Because it eliminates many of the human components. Human component = bottleneck.

Also, we cannot know what an ANN does in complex systems; we can only study its answers, not how it reaches those answers. If we add that we cannot be sure whether it has reached consciousness or not, then we could be testing something that might be very intelligent without knowing anything about it.

What if it plays the dumb card? Or provides us exactly the answers needed to convince us to do something for it, so subjective and indirect that it is impossible to notice.

Or gives us a discovery that we cannot reject, but inside that discovery, like a trojan hidden beyond our knowledge level, there is its way out.

And those are only the ones I can imagine; what about all those that a superintelligence can imagine?

It will be like the creators of Windows 95 against all the hackers of the world.

It doesn't matter how many great ideas or revolutionary concepts the AI comes up with, unless it can persuade enough humans to go along with it, those ideas won't ever see the light of day. And goodness knows, humans are bad enough at listening to their own scientists - who says they're going to be any better at listening to a machine intelligence with motives and motivations that they don't (and maybe can't) understand.

If it is intelligent enough it can do whatever it wants to, and the only way to prove how intelligent it is, is by its answers, which it will carefully choose.

Heh, that looks like a visual analog to a tongue-twister.

First I want to say I'm happy about the way the discussion goes. There's no bashing and no unnecessary Terminator references. :)

Oops, I already made one answering KSK in the previous post :) "Sarah Connor"

Could you give an example which quantum algorithms are perfect for ANNs?

Read above; KSK asked the same question.

I'm not sure I understand you right. What line? What lower levels?

Wolves are known to hunt in packs. There are drivers which drive the prey into a trap: a place surrounded by other wolves of the pack waiting there. That proves the ability to plan ahead.

If you say that intelligence is that thing that only humans have, then what is that thing monkeys have? Even though they can speak with signs...

Or if monkeys and wolves are intelligent, then why not the other animals?

What do you call the decision making of the other animals? Instinct? No...

So it is impossible to draw a line showing where you call something intelligent and where you don't.

What about problems an ANN wasn't trained for? It can't "think" outside of the box and be creative.

You have genetic algorithms in combination with ANNs, and different structures that can deal with that.

This was done by somebody random on the internet who just wanted to experiment.

For the second video I'm not sure how the code is done; it uses music rules with something more.

We may think that we are very creative, but we are not; we just add grains of sand to a big pile of patterns that we choose.

I'm under the impression that you didn't completely understand how an ANN works.

You should read the full OP some time :)

The deep learning algorithm works differently; there are many different learning techniques for ANNs and ways to change their structure, or to fuse hidden ANNs together to reduce carried-over errors in the calculations (when they don't find the global minimum of a function).

Clues are not enough. We need facts. Afaik the brain is still mostly not understood. For example they found groups of only about a couple thousands neuron which only have a few connections to the outside of the groups just a few years ago. And they still don't know why they are there and what they do.

Making a human kind of AI, with consciousness and emotions such as "curiosity, fear, pleasure, etc.", is super hard. Mainly because all those emotions are already coded by genetics and developed in a small part by experiences.

But something really scary is to make an AI without those things, and those, I guess, are the easiest to make.

What I tried to say is that ANNs are not the pinnacle of AI programming. IMHO evolutionary programming is the way which leads to a real thinking AI. While ANNs are static (they have neurons, they have connections between the neurons - that will never change) evolutionary algorithms can change everything about themselves. There's nothing static anymore which means they can adapt to unknown situations and can come up with creative solutions to a problem. It's even possible that they come up with their own kind of ANN to solve specific problems.

That is what I said: evolution techniques make them not static, and there are also other techniques to change that.

Here's an example of an evolutionary algorithm forming a morphology and at the same time developing techniques to move around and perform special tasks. An ANN would never be able to do that on its own.

Yeah, check my links in the OP too, the one that learns to walk using muscles. I remember when engineers took like 5 years to come up with an algorithm to walk or keep balance.


A child getting letters backwards is an error/bug, not a feature. I can tell you the difference between a b and a p. However, my wires occasionally get crossed and put them backwards (actually not for me, mine is with different letters, but many have that problem in expression and sometimes in reading. That is a language problem, not a visual or observational one and not a computational one. Show the same people a cat/dog and they get it right every time).

Thus your examples are way way under researched and your understanding of the problems and systems is not deep enough to comment on the actual mechanical or mathematical functions.

Eg, "we will have fusion power in 10 years time", or "we will have FTL travel in 10 years time" or "we will have AI in 10 years time". The problems are not the ones we think they are, and are harder than they seem.


But all those, as I said, are predesigned by genetics. A neuron in our brain will not automatically make a connection to a distant neuron in another part of the brain; in case it needs to, it would make a connection path through the adjacent neurons until it reaches these bridges ("association fibers"), then cross other neurons until it reaches the one it needs. So all those intermediate neurons are not adding calculations for that particular connection; that is something that can be simplified with electronics, where you can connect any perceptron to any other perceptron.

Well, I'm not sure we know that the intermediate neurons aren't adding anything, but I'll concede the point. Association fibres only make particular links between regions of the brain; in general a neuron cannot make an arbitrary connection to any other neuron. Sorry - misread your original question slightly.

http://www.triniti.ru/CTF%26VM/Articles/Ezhov1.pdf

Thank you very much for this link - and have some rep. That does look very interesting and I look forward to reading it!

You know why, of all ANN models, deep learning is so powerful? Because it eliminates many of the human components. Human component = bottleneck. Also, we cannot know what an ANN does in complex systems; we can only study its answers, not how it reaches those answers. If we add that we cannot be sure whether it has reached consciousness or not, then we could be testing something that might be very intelligent without knowing anything about it.

What if it plays the dumb card? Or provides us exactly the answers needed to convince us to do something for it, so subjective and indirect that it is impossible to notice. Or gives us a discovery that we cannot reject, but inside that discovery, like a trojan hidden beyond our knowledge level, there is its way out.

And those are only the ones I can imagine; what about all those that a superintelligence can imagine? It will be like the creators of Windows 95 against all the hackers of the world.

True - but my point about it still needing humans to carry out its plans still stands. And I don't think it matters how reasonable something might sound; some people will automatically distrust it (and so not do it) if they know they're talking to an AI.


I don't think that we were ever talking exclusively about ANNs, were we?
I was under the impression that was the case. Damn language barrier!

Btw, language is a good example of where AIs have a lot of problems. Depending on the language it can be very difficult to come up with a set of rules to describe it. Or in other words, AIs have problems understanding languages.

The main reason is of course that meanings of words and phrases are context sensitive. As long as an AI doesn't understand the context it won't understand content.

A prime example for that is Japanese which is very context heavy:

Your friend asks: Do you want to go home together?

You answer: [Yes, I want to] Go [together with you].

Your answer can be interpreted in a number of ways, but it simply means that you agree to the proposal of your friend.

So far I didn't see a translator or algorithm which gets context dependent meanings right.

But all those as I said are predesigned by genetics.
That's quite a hot topic.

Scientists and philosophers have been discussing for hundreds of years whether a baby's mind is "pre-programmed" in the womb or not. So the question is whether visual recognition of things has to be learned or the ability is already there.

If we apply that to AIs we get the question of whether they should be pre-programmed with a few rules and/or knowledge or not. What's your opinion about that?

So is impossible to draw a line to see where you call something intelligent and when not.
It just came into my mind: If we can't come up with a definition of "intelligence" could it be that there's no such thing?

In my computer science studies I was always surprised what other people call "intelligent" algorithms. Usually the algorithms are so simple that I call them cleverly designed but not intelligent.

Also the difference of human and animal "intelligence" doesn't seem so big if we compare both. Planning, learning, etc. can be done by both, can we still say animals are stupid and humans are intelligent if they have the same abilities?

Make an human kind of AI, with consciousness and emotions as "curiosity, fear, pleasure, etc" is super hard. Mainly because all those emotions are already coded by genetics and develope in a small part by experiences.

Afaik shame is an emotion which children don't have. They develop it just before or during puberty. Could it be that emotions are learned instead of being "pre-programmed"?



A prime example for that is Japanese which is very context heavy:

Your friend asks: Do you want to go home together?

You answer: [Yes, I want to] Go [together with you].

Your answer can be interpreted in a number of ways, but it simply means that you agree to the proposal of your friend.

This is true in many languages; a bow of the head, or in Japanese a male can give a low, barely audible grunt, carries the same meaning. The algorithm is experience: in certain circumstances words are less desired. Not always true, but often true, that in Japanese culture, compared to other Asian cultures, talking in certain circumstances is shameful. It's part of the bushido to avoid conflict. If a machine knows the rules about conflict avoidance then it can empathize with the speaker and respond accordingly. Somewhere in the child's life there has to be a guardian that reinforces the ethic, so somewhere in the computer's programming or learning it also has to be shown. IOW, if you want to match computer with human, at least once you have to raise it as a child, which means the computer needs to test its parents for social boundaries.

As for babies' minds, the human cerebral cortex is largely demyelinated neurons; they are not preprogrammed. Traumatic injury or ablative surgery to an infant's brain has far less impact than on an adult's; the infant mind is very plastic. If you are arguing that taking two identical twins and raising them in exactly the same environment will produce nearly identical personalities, this is true, but in two remarkably different environments there will be some similarities and a lot of dissimilarities. One of the major observed differences between identical twins raised apart is the epigenetic changes that occur as a result of food and lifestyle.

Think of the human brain as having a lot of facilities, capabilities that make certain tasks easier. 20 years ago you would not have thought that humans could talk while holding a 3 x 5 thin solid, inputting 80 WPM, and walking at the same time, but they can. At the same time, a decline in maintaining long-term mating relationships has been starkly observed across all cultures.


Thus your examples are way way under researched and your understanding of the problems and systems is not deep enough to comment on the actual mechanical or mathematical functions.

Whose post are you responding to? If you're going to lob a barbed comment, can you please at least indicate who it is directed at?


True - but my point about it still needing humans to carry out its plans still stands. And I don't think it matters how reasonable something might sound; some people will automatically distrust it (and so not do it) if they know they're talking to an AI.

OK, let's imagine that governments or companies can control the AI they produce.

But progress does not stop there; in fact at this point it is very accelerated. You cannot prevent someone else in the world, from his garage, eventually creating a hard AI structure. The only way is having another hard AI trying to prevent this. But for that, you need to give this hard AI freedom.

Now the question would be: would it be possible to have a powerful AI without consciousness do this work? I guess it is possible, and it would need constant upgrades because each time it will be harder to stop. Which may bring us back to the main problem :S

By the way, I read my last post again; damn, I make huge English mistakes. I hope to do better this time.

That's quite a hot topic.

Scientists and philosophers have been discussing for hundreds of years whether a baby's mind is "pre-programmed" in the womb or not. So the question is whether visual recognition of things has to be learned or the ability is already there.

If we apply that to AIs we get the question of whether they should be pre-programmed with a few rules and/or knowledge or not. What's your opinion about that?

Afaik shame is an emotion which children don't have. They develop it just before or during puberty. Could it be that emotions are learned instead of being "pre-programmed"?

Those are important questions.

Genetics by itself includes the base structure, so the brain can learn and function, then memories, instant reflexes (which might be located in the brain or in the nervous system itself), behavior, and emotions.

For example, it was found that mice might pass memories on to their offspring.

We know that our behavior and emotions have a true genetic base; evolution gave us the key features to help us survive.

When we need to be selfish or altruistic, when to feel love, when to feel empathy or anger, when we need to be aggressive or feel fear.

Then experiences help us to develop those emotions and behaviors even more. But there is always a genetic root that guides us, in the same way this genetic ANN helps Mario to survive.

So if we can create an AI with that info already coded/trained into its ANN structure, then we may have less chance of making an evil AI.

It just came into my mind: If we can't come up with a definition of "intelligence" could it be that there's no such thing?

No, I guess the definition of intelligence is not hard; the one that is super hard is consciousness. Still, nobody really knows what it is. We may be able to create it before we understand it.

In my computer science studies I was always surprised what other people call "intelligent" algorithms. Usually the algorithms are so simple that I call them cleverly designed but not intelligent.

Also the difference of human and animal "intelligence" doesn't seem so big if we compare both. Planning, learning, etc. can be done by both, can we still say animals are stupid and humans are intelligent if they have the same abilities?

But the problem is your definition of intelligence. You take human intelligence as the base; then those that approach it are intelligent, those that don't are not. But then you need another definition to describe circuits, algorithms, or a cockroach.

Why are some cars called intelligent? Or the "intelligent house"? When in fact the only thing some houses do is turn on a light when it is night or when someone is close.

Intelligence is anything that, depending on some inputs, produces an outcome. Then you can calculate the level of intelligence based on how many inputs and outcomes it produces and how accurate they are. Then you may have a formula that goes from intelligence = 0 to infinity.

