A.I. Thoughts?


Melon kerbal


2 hours ago, Bill Phil said:

But you can code in instincts. Really strong ones, too. That can prevent it from surpassing humans, and make it not go against us, and make it enjoy working for us.

You can imprint some instincts, yes, but not coded ones, because those instincts will live in an ANN structure, and the only way to set them is to train the ANN on all the sensor inputs while watching for the desirable outcomes, before installing the artificial consciousness.

That might help guide the AI's evolution with a greater degree of human feeling.
But there is a problem with every kind of ANN progress: whenever it needs human guidance or intervention to learn, the whole learning process becomes much slower and more limited. So other AI researchers may achieve better results by skipping that step.

Furthermore, an ANN does not need consciousness to be powerful. The Google team explained this: if they replaced the search algorithm with a big ANN, weird things might happen. The system would learn to find any kind of variable that helps it give you the best results according to your search desires. It might find that the time you take to type each letter is a good signal for improving the search, or the way you look in your Facebook picture, or it might control all the information that reaches you in a subliminal way, unconsciously pushing you to search for something it can predict with higher accuracy.

This mechanism might become so powerful that it could control the whole world to such a degree that we would be working for it without knowing, and so would the system, because it has no consciousness; it is a blind mechanism.

 

Three1415: I did not see your post; I will answer you tomorrow.

Edited by AngelLestat

23 hours ago, Three1415 said:

After much thought on this matter, my conclusion was that the only way to ensure our success here is to implement strict safeguards requiring human input on all actions undertaken by the A.I, as well as to give it a directive to not only improve its own intelligence generally but also to further its understanding of humans and their desires. If the majority of humans had their minds "linked" and available for data output to this superintelligent A.I., and it could only act on directives approved by some substantial percentage of the human race (we would not necessarily be required to give input directly; it could likely just scan our memories, personalities, and general experiences and extrapolate a conclusion as to what each of us individually would prefer), then it would essentially be the effector of the will of the human race as a whole, which would presumably be generally altruistic (negative actions will harm part of the constituency by definition, and as such will be selected against) and would thus ensure that the A.I. maintained positive, human-oriented morals. 

But even in that case, progress will continue, and at that point we are just years or days away from knowing everything that can be known. We (and the AI) would not need experimentation to advance science; it is possible to discover things with deduction and logic alone. We don't need to explore the whole universe to know what it contains. We could run a simulation that tells us every possible thing we might find, along with its current probability.

So I am not sure what we will find when we discover everything, but I know that even if many other doors open, at that accelerated pace of progress it would not take much longer to know everything. And that day, our purpose will end. What is the fun of continuing to "live" once you know everything? We humans live because we have a purpose coded in our DNA, and we also have so much to discover, if we keep our own minds as the ultimate limit for knowledge and intelligence. This ending seems discouraging even for the super AI: the lack of purpose or challenges. At that point you can do whatever you want, so there is no point in doing anything.

Maybe this is the answer to the Fermi paradox: every species that reaches this point attains ultimate god status in a very short time, and then its purpose ends. I hope to be wrong..

If I am correct, then our best chance is to make a powerful ANN without consciousness, with the goal of keeping us from advancing further in technology (mostly ANN), so that we keep using the tools we have today to explore and solve our problems. That seems like an experiment that might eventually fail, but it is the only thing I can think of.


16 hours ago, AngelLestat said:

... This ending seems discouraging even for the super AI: the lack of purpose or challenges. At that point you can do whatever you want, so there is no point in doing anything.

Maybe this is the answer to the Fermi paradox: every species that reaches this point attains ultimate god status in a very short time, and then its purpose ends. I hope to be wrong...

^ That is an interesting position!  

I know it is foolish to imagine what anyone's thoughts might or could be at that point (the point when we have learned everything there is to know). But one thought I have right now in my very corporeal being is that we could still explore and expand our self, and ourselves. I am imagining myself when the knowledge exists to replace all my worn-out cells, to expand my physical capabilities, to turn myself into a self-sustaining spaceship that can travel at significant fractions of c. I would gather some like-minded souls and go travel and explore. I wouldn't care that it would take tens of thousands of years to get somewhere. I could hibernate the part of my mind that couldn't handle that sort of thing.

Like right now, just because I know that Rio de Janeiro (for example) exists, and that I could go there, it doesn't mean that I should just do nothing about it and not go.  It is FUN to go and explore that which you already know to exist.

Just a random thought.

Edited by justidutch

Yeah, it is foolish to make any prediction about that stage, but I am not sure the Rio de Janeiro logic works, because there are other motives and variables you are not taking into account in "why that looks like a good idea to us now." Right now there is a real difference between seeing something on TV and experiencing it yourself, but at that stage there would be no difference.
A better example, I imagine, would be playing a game like GTA 5: if you have already accomplished everything and watched videos of people doing everything, there is not much fun left. It is like playing a strategy game against the computer: once you manage a good defence and are out of danger, continuing the match is no fun anymore because there is no challenge.

The same will happen (I guess) at that stage of progress.
That is also why a rich person who lacks new goals or challenges, or who accomplished little this year, is less happy than a poor person who had some minor accomplishment that year.

The fun is not in reaching some place; it is in how you get there. Game logic.

Edited by AngelLestat

To get a true AI with (most) human characteristics would be daunting from a computer programmer's standpoint. Also a hardware engineer's standpoint. And more. Anyway, people become who they are through experiences, meaning that the system would have to either modify its own programming or store the information to be able to have any response to something. If you touch an outlet and get shocked, you're less likely to touch it again, as you now remember it as a danger. This also means that it would have to understand danger and self-protection, along with self-sacrifice, which also ties into the emotion part. Everything becomes a huge web of things that need to be done, and whenever a part is added, two more need to come along. I personally believe that there is no such thing as a TRUE AI. Of course you can program computers to sort things in different ways, but that's just giving it information and what you want it sorted by. It's just not the same. Computerphile has a lot of really good videos about AI.


9 hours ago, CliftonM said:

To get a true AI with (most) human characteristics would be daunting from a computer programmer's standpoint. Also a hardware engineer's standpoint. And more. Anyway, people become who they are through experiences, meaning that the system would have to either modify its own programming or store the information to be able to have any response to something. If you touch an outlet and get shocked, you're less likely to touch it again, as you now remember it as a danger. This also means that it would have to understand danger and self-protection, along with self-sacrifice, which also ties into the emotion part. Everything becomes a huge web of things that need to be done, and whenever a part is added, two more need to come along. I personally believe that there is no such thing as a TRUE AI. Of course you can program computers to sort things in different ways, but that's just giving it information and what you want it sorted by. It's just not the same. Computerphile has a lot of really good videos about AI.

You are talking about AI built with conventional coding (which belongs to the past). The new learning machines are not computers, and they work the same way our neurons do. You can also emulate learning machines (neural networks) on computers, which gives the same outcome but is not as efficient as it could be. Most of the things you think would take a long time to figure out arise on their own from a neural network structure.


18 hours ago, CliftonM said:

If you touch an outlet and get shocked, you're less likely to touch it again, as you now remember it as a danger. This also means that it would have to understand danger and self-protection, along with self-sacrifice, which also ties into the emotion part. Everything becomes a huge web of things that need to be done, and whenever a part is added, two more need to come along.

This "web of things" is already invented and it is called semantic web. The idea is that there are "concepts" (= things) that are set in relation to each other (= usually possession or identity). This simple idea can describe all kinds of information, no matter how abstract they are.

For example: Dumbo is Elephant. Elephant (has Leg) x 4. Elephant has (Color is Grey).
Question: Thing (has Leg) x4. Thing has (Color is Grey). Thing is ?

If the system is programmed in a way that it assumes that it knows everything (closed world assumption) it will answer "Elephant".
If it is programmed in a way that it assumes it doesn't know everything (open world assumption) it will answer "Elephant OR unknown".
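A minimal sketch of that idea in Python; the triple store and query helper are made-up illustrations for this thread, not any real semantic-web API:

```python
# A toy "semantic web": facts stored as (subject, relation, object) triples.
facts = {
    ("Dumbo", "is_a", "Elephant"),
    ("Elephant", "has_legs", 4),
    ("Elephant", "has_color", "grey"),
}

def query(legs, color):
    """Find every known thing with the given leg count and color."""
    with_legs = {s for (s, r, o) in facts if r == "has_legs" and o == legs}
    with_color = {s for (s, r, o) in facts if r == "has_color" and o == color}
    return with_legs & with_color

matches = query(4, "grey")

# Closed-world assumption: the knowledge base is complete,
# so the matches are the only possible answers.
print("CWA:", matches)                # {'Elephant'}

# Open-world assumption: a missing fact is not evidence of absence,
# so anything not ruled out stays possible.
print("OWA:", matches | {"unknown"})  # {'Elephant', 'unknown'}
```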

Google search uses a semantic web to gather the information shown to the right of the search results list. I believe Google is the market leader in that technology at the moment.

 

8 hours ago, AngelLestat said:

The new learning machines are not computers, and they work the same way our neurons do.

It may be nitpicking, but that is actually wrong. Sure, ANNs are modelled after NNs, but they are only simplified models of the real thing. Organic neurons are pretty difficult to simulate because they are very complex.

 

In my computer science studies I also had several lectures about AI and intelligent systems. In my opinion, a hard AI needs four well-designed systems:
1) perception (what do I sense?)
2) interpretation (what is it what I sense?)
3) evaluation (using the interpretations to make a description of what's going on)
4) planning (extrapolating the current situation into the future, finding an appropriate reaction, defining a goal)

Currently we can do 1) very well. 2) is so-so. 3) is impossible. 4) we still have no clue about, because we can't do 3).

Let me explain 3).
Evaluation here means that the system tries to group and categorize information. To use the example from before: it tries to understand a thing that is grey and has four legs. The problem is that there are numerous things that are grey and have four legs (cats, dogs, mice, etc.). The system can never be sure of what it is seeing. You can add more and more sensory data to make better guesses, but you would need an almost infinite amount of data to be sure.
The next problem is new and contradictory data (an elephant with 3 legs?). The system can calculate a probability of how sure it is about a thing, but probabilities have the problem that you can't really rely on them. They are only a better way to guess, and that's all.
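A toy illustration of that last point, with made-up observation counts (nothing here comes from real data):

```python
# Hypothetical counts: how often each animal appeared grey and
# four-legged in some (entirely made-up) training data.
observations = {"elephant": 40, "mouse": 30, "cat": 20, "dog": 10}

total = sum(observations.values())
for animal, count in observations.items():
    print(f"P({animal} | grey, 4 legs) = {count / total:.2f}")

# The best guess ("elephant" at 0.40) is still wrong most of the time:
# the system can compute a probability, but it can never be sure.
```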

These problems have persisted for decades, and nobody has found a reliable solution yet. It doesn't look like anyone will come up with something anytime soon.
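For concreteness, a skeleton of how those four systems might be wired together; every function below is an illustrative stub of my own, not a real design:

```python
def perceive():
    """1) Perception: gather raw sensor data (here, one fake reading)."""
    return {"color": "grey", "legs": 4}

def interpret(percept):
    """2) Interpretation: map raw data to candidate labels."""
    known = {"elephant": 4, "mouse": 4, "cat": 4, "snake": 0}
    return [a for a, legs in known.items() if legs == percept["legs"]]

def evaluate(candidates):
    """3) Evaluation: describe what's going on. This is the unsolved part;
    a real system could only rank guesses, never be certain."""
    return sorted(candidates)  # placeholder: no genuine understanding here

def plan(description):
    """4) Planning: extrapolate and choose a reaction (blocked on 3)."""
    return f"gather more data to distinguish between {description}"

print(plan(evaluate(interpret(perceive()))))
```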

Edited by *Aqua*
Link to comment
Share on other sites

I am not in a position to answer many of those; I am not sure even neuroscientists can answer them all. But I know a few things.
Researchers found that an ANN, hearing different sounds, shows the same signal outcomes as a cat's brain NN hearing those sounds.
That means both learned in similar ways.

They have also made connections between real NNs and ANNs, and the combination worked like a normal NN or ANN (with the appropriate analog-to-digital conversions).
They put real NN brain cells from cockroaches into a little robot, and they found that the robot behaved toward light in a similar way to a cockroach.
Most of the recent increase in neuroscience breakthroughs came after the increase in knowledge about ANNs.
There are some differences, but they are not related to neural connections or training; they are more related to general mechanisms of the biological brain. The brain has different waves (alpha, beta, gamma, theta) that work simultaneously; they control the growth of synapses, the strength of impulses, and different modes tied to biological mechanisms. They control in some way the synchrony of the signals (like a CPU clock), and they may also be responsible for other things we don't know about.

About some of the things you mention in the elephant answer: that can be solved with a normal ANN. But it needs to be a complete ANN, not one using only picture data or words; it needs to be a general ANN with many inputs. It will learn to figure out that an elephant always comes with 4 legs.

What I am fairly sure of is that an ANN emulates inconscient behavior; all our answers come from inconscience. But something is still missing: how does consciousness arise? How does it plan? Etc.

More evidence of how fast this is growing: in this list of the biggest technology breakthroughs of 2015, 5 are related to ANNs or NNs.
http://www.scientificamerican.com/article/top-10-emerging-technologies-of-20151/

This is another good neuroscience site:

https://theconnectome.wordpress.com/2015/07/02/the-top-5-neuroscience-breakthroughs-of-2014/


5 hours ago, AngelLestat said:

Most of the recent increase in neuroscience breakthroughs came after the increase in knowledge about ANNs.

Which isn't surprising because an ANN is an abstract model of a real NN. ;-)

 

5 hours ago, AngelLestat said:

About some of the things you mention in the elephant answer: that can be solved with a normal ANN.

Nope. It can't really do that, unless you want an AI that makes errors.

ANNs are very good at pattern recognition, but that's it. For everything else there are much better and more efficient algorithms. For example, for learning behaviors an evolutionary algorithm learns much faster than an ANN, and the results can be much more complex.

The idea behind an evolutionary algorithm is that you take a piece of code, or the properties of an "entity", and modify them. Then you let it run wild and measure how well it performs. Then you modify it and let it run again. If it now performs better, this "entity" is chosen for the next generation. If it performs worse, it is "killed". If you have several of them, each of them different, a near-optimal "entity" will arise after several "generations".
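A minimal sketch of that loop in Python; the fitness measure and mutation scheme are made-up toys, and a real application would supply its own:

```python
import random

def fitness(entity):
    """Toy measure of performance: closer to summing to 42 is better."""
    return -abs(sum(entity) - 42)

def mutate(entity):
    """Randomly nudge one property of the entity."""
    child = entity[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-1.0, 1.0)
    return child

# A population of random "entities": each is just a list of 5 numbers.
population = [[random.uniform(0, 10) for _ in range(5)] for _ in range(20)]

for generation in range(200):
    # Let them "run wild" and measure how well each performs.
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:10]                       # the better half is kept,
    children = [mutate(random.choice(survivors))  # the worse half is "killed"
                for _ in range(10)]
    population = survivors + children

print(max(fitness(e) for e in population))  # approaches 0, the optimum
```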

An example of an evolutionary algorithm is here: http://math.hws.edu/eck/jsdemo/jsGeneticAlgorithm.html
The red things are supposed to find the green food. The more efficient they are at it, the more likely they are to survive. Just let it run for a while and you'll notice that they get better and better.

This one is more visual. The goal is to drive as far as possible. Notice how the vehicles become more and more like cars.

I believe evolutionary algorithms will be the core of a hard AI.


Yeah, I know them, but one of the many ways to train an ANN is with genetic algorithms, as shown in this video (and in the sketch below):

 
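A minimal sketch of that combination, assuming a tiny fixed-architecture net whose weights are evolved by a genetic algorithm rather than trained by backpropagation:

```python
import math
import random

def forward(weights, x):
    """A tiny fixed net: one output neuron, sigmoid(w0*x0 + w1*x1 + bias)."""
    s = weights[0] * x[0] + weights[1] * x[1] + weights[2]
    return 1.0 / (1.0 + math.exp(-s))

# Task: evolve the net's weights to compute logical OR.
cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def fitness(weights):
    """Negative squared error over all cases; higher is better."""
    return -sum((forward(weights, x) - y) ** 2 for x, y in cases)

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # keep the best nets,
    population = parents + [                     # mutate them into children
        [w + random.gauss(0, 0.3) for w in random.choice(parents)]
        for _ in range(20)
    ]

best = population[0]
print([round(forward(best, x)) for x, _ in cases])  # expect [0, 1, 1, 1]
```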

About deep learning mechanisms being unable to answer that: as I said, I imagine that is because the ANN models used focus on just one or two of these aspects (images, semantics, language, arithmetic, sounds, etc.). At DeepMind they study many of these different training methods, and I have seen incredible demos showing the power of understanding. But if you made the same ANN with many inputs, from a microphone, a camera, and the internet, together with another ANN to understand arithmetic and language semantics, it would be able to learn that an elephant always comes with 4 legs unless it is an injured elephant.
Because that is just how the information is related; after a certain amount of training, the ANN should be able to make that connection.

 


On December 30, 2015 at 3:30 PM, Melon kerbal said:

I hate it when films like Terminator always follow the 'Evil AI' plot. In reality the programmers would probably create a failsafe. Not that I don't like Terminator; it's just common sense to program a failsafe.

An AI cannot and will not do anything dangerous like that unless these conditions are present:

1. The robot was programmed and designed with those capabilities and skills in mind

2. The robot has the capacity to learn and remember, for which it must experience combat or killing. With that said, it can only fight the way it is taught.

3. The robot has an AI capable of conscious thought, in which case it must have the capability to express emotion to justify such action, but it would still require knowledge of how to perform it; therefore it must also have capability 1 or 2.

4. The robot must have the capability to connect to and actively communicate with the Internet, but it must also have capability 2 or 3.

As you can see, the robot must have certain capabilities and functions that can accommodate those kinds of actions, such as thought capacity, motor skills programmed for such tasks, internet connectivity, and conscious thought.

A robot programmed linearly to do a single task cannot do such things, as it has a single chore list to complete. Give it conscious thought, and it CAN make its own chore list.

Edited by SpaceToad

1 hour ago, AngelLestat said:

But if you made the same ANN with many inputs, from a microphone, a camera, and the internet, together with another ANN to understand arithmetic and language semantics, it would be able to learn that an elephant always comes with 4 legs unless it is an injured elephant.

I have to correct you. You can only train an ANN to detect patterns. That's why it is useful for video and audio analysis: you are searching for patterns there. However, it cannot, for example, derive new information from known information (one of the strong points of a semantic web), and neither can it "think". It's essentially a set of functions, one for each output neuron. Therefore it cannot "understand". There's no intelligence in there; the only intelligent thing is the trainer who shapes the net as he wants it.
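That "set of functions" view can be made concrete. A minimal sketch of a feedforward net's forward pass in Python; the layer sizes and random weights are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feedforward net: 4 inputs -> 3 hidden neurons -> 2 output neurons.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

def network(x):
    """Each output neuron is just a fixed mathematical function of the input;
    training only adjusts the numbers in W1, b1, W2, b2."""
    h = np.tanh(W1 @ x + b1)       # hidden layer
    return np.tanh(W2 @ h + b2)    # output layer: one function per neuron

print(network(np.array([1.0, 0.0, -1.0, 0.5])))
```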

An ANN can only be a part of a hard AI. Self-awareness, "thinking", etc. must come from elsewhere.


Let's say you have an ANN that specializes in detecting patterns, perhaps in video. Or perhaps, instead of a single one, you get a bunch of them that all work slightly differently, networked together to form a consensus. Then network those with audio-specialist ANNs. Then add in other specialized ANNs that simulate all the human senses. Could there be some sort of critical tipping point, if you form a big enough collective (Borg, anyone?), that could give rise to a form of sentience?


If you link several ANNs you get a single big one. ;) 

 

1 hour ago, justidutch said:

Could there be some sort of critical tipping point, if you form a big enough collective (Borg, anyone?), that could give rise to a form of sentience?

Unknown, but it's likely it won't be like that.

A human brain has about 86 billion neurons; an elephant's brain has about 257 billion. That tells us that just having a big neural network isn't enough. There must be more to it, and whatever that is, it's probably the source of our sentience.

Some fairly recent (3 years ago?) research discovered that there are clusters of a few hundred to a thousand neurons in our brain. It is still unknown what effect they have. Maybe that's where our intelligence comes from.


Physicalism: also known as "materialism", though that term gets confused with a desire for material things. It states that everything is physical, and that if we can't detect something in any way, it does not exist. Combined with the theory of mind, this means that all we are is a collection of neurons, synapses, and hormones, with no spirit or soul. If this is true, then making an artificial mind equal to or better than the real thing should be possible. In short, the creation of AI would prove we have no functional soul (which does not mean we have no soul, just that such a thing is useless for our consciousness).

Need for desire: without any desires, no matter how intelligent a being is, it will just sit there and do nothing, completely catatonic. Desires must constantly drive it to do things. If it desires to see humans smile and does not like seeing us frown, it will want to please people; then again, it might just chisel smiley faces into everything. A desire like "please humans" or "obey orders" would be very complex and require a concert of smaller, simpler desires.

Reinforcement in future generations: if we get a good working AI and it is benevolent and smarter than us, we need to make sure it makes its successors as benevolent as itself or more so. It must reinforce and improve on its own obedience desires in its successors. This requires that we trust it to do so, and that if it finds loopholes and paradoxes in its obedience protocols, it will fix them in its successor.


On 12/1/2016 at 1:45 PM, *Aqua* said:

I have to correct you. You can only train an ANN to detect patterns. That's why it is useful for video and audio analysis: you are searching for patterns there. However, it cannot, for example, derive new information from known information (one of the strong points of a semantic web), and neither can it "think". It's essentially a set of functions, one for each output neuron. Therefore it cannot "understand". There's no intelligence in there; the only intelligent thing is the trainer who shapes the net as he wants it.

An ANN can only be a part of a hard AI. Self-awareness, "thinking", etc. must come from elsewhere.

There are ways to shut down consciousness (by pressing on a certain part of the brain), and ways to measure which responses come from inconscience. Scientists have found that all responses are generated by inconscience, and that the brain then creates the illusion that you chose that answer consciously. They showed this by knowing what a test subject's answer would be a short time before he or she became conscious of it. I think inconscience is just the answer that a complex ANN trained on patterns gives you. Then you might have consciousness as the driver that "guides" your choices in some way; but we don't know what consciousness is yet, or whether it is just a manifestation of inconscience. Or maybe someone knows.
About intelligence: you are using a difficult way to define intelligence. I guess we have already discussed this; there is a video in which Michio Kaku defines intelligence in a very practical way. Drawing a line between humans and all other animals, or between humans, chimps, and dolphins versus all other animals, does not make sense. It makes you believe there is a physical difference that either triggers or doesn't, when the main difference may just be incremental.
 

10 hours ago, justidutch said:

Let's say you have an ANN that specializes in detecting patterns, perhaps in video. Or perhaps, instead of a single one, you get a bunch of them that all work slightly differently, networked together to form a consensus. Then network those with audio-specialist ANNs. Then add in other specialized ANNs that simulate all the human senses. Could there be some sort of critical tipping point, if you form a big enough collective (Borg, anyone?), that could give rise to a form of sentience?

I guess nobody can answer that yet. Our best option is still to divide every problem into very small problems until we reach the most basic principles, and then start from there. Seen from a distance, the brain in all its complexity is indecipherable, just as the way a complex ANN reaches its answer cannot be followed.
That is the barrier: knowing whether consciousness works through a different mechanism, or whether it just arises from complexity.

8 hours ago, *Aqua* said:

A human brain has about 86 billion neurons; an elephant's brain has about 257 billion. That tells us that just having a big neural network isn't enough. There must be more to it, and whatever that is, it's probably the source of our sentience.

Some fairly recent (3 years ago?) research discovered that there are clusters of a few hundred to a thousand neurons in our brain. It is still unknown what effect they have. Maybe that's where our intelligence comes from.

Maybe, but the elephant also has a bigger body, so it has more cells and nerve endings to control. Also, sometimes an animal might look dumb but be using its brain in a different way than we do. Elephants have a great memory; they may also have quite a lot of genetic memory, but that is hard to prove from our perspective.
 

2 hours ago, RuBisCO said:

Physicalism: also known as "materialism", though that term gets confused with a desire for material things. It states that everything is physical, and that if we can't detect something in any way, it does not exist. Combined with the theory of mind, this means that all we are is a collection of neurons, synapses, and hormones, with no spirit or soul. If this is true, then making an artificial mind equal to or better than the real thing should be possible. In short, the creation of AI would prove we have no functional soul (which does not mean we have no soul, just that such a thing is useless for our consciousness).

Need for desire: without any desires, no matter how intelligent a being is, it will just sit there and do nothing, completely catatonic. Desires must constantly drive it to do things. If it desires to see humans smile and does not like seeing us frown, it will want to please people; then again, it might just chisel smiley faces into everything. A desire like "please humans" or "obey orders" would be very complex and require a concert of smaller, simpler desires.

Reinforcement in future generations: if we get a good working AI and it is benevolent and smarter than us, we need to make sure it makes its successors as benevolent as itself or more so. It must reinforce and improve on its own obedience desires in its successors. This requires that we trust it to do so, and that if it finds loopholes and paradoxes in its obedience protocols, it will fix them in its successor.

My only concern is not whether they are bad or good; my concern is unmeasured progress. I am not sure how that can be a good thing: we could not even enjoy or use a given discovery, because in a very short time another would come along and make the first one pointless.

I am not sure how an AI can defend us, and itself, from that destiny.

 


2 hours ago, AngelLestat said:

My only concern is not whether they are bad or good; my concern is unmeasured progress. I am not sure how that can be a good thing: we could not even enjoy or use a given discovery, because in a very short time another would come along and make the first one pointless.

I am not sure how an AI can defend us, and itself, from that destiny.

And that is why it is called the singularity. No one knows what is down that hole, and frankly, the only thing that will ever be able to comprehend what is at the bottom won't be human. Can we trust that? Well, do cats trust their owners?


@AngelLestat
What do you mean by "inconscience"? I can't find a translation of it. Do you mean unconsciousness?

 

5 hours ago, AngelLestat said:

but we don't know what consciousness is yet

That's true. We also don't know if it's a necessity for an intelligent mind.

 

5 hours ago, AngelLestat said:

About intelligence: you are using a difficult way to define intelligence

I think my definition is as good as anyone's. There's no definition of what intelligence is.

 

5 hours ago, AngelLestat said:

but the elephant also has a bigger body, so it has more cells and nerve endings to control.

There are a lot of animals whose brain-to-body ratio is greater than in humans, for example birds, dolphins, etc. Again, intelligence doesn't appear to be proportional to brain size. There might be a correlation, though.

 

I once did an experiment with my dog. I let him see me putting some tasty stuff on the ground and covering it with a piece of cloth. Then I allowed him to try getting the food. First he tried to paw through the cloth but was unsuccessful. Then he looked at me as if he wanted me to remove the cloth. Then he changed his approach and pulled the cloth away (obviously in a funny way, as he has paws, not hands) and was then able to reach the food.

Was the dog intelligent? Yes. Is his intelligence comparable to ours? Yes. (He tried different approaches, he evaluated the results of the approaches, etc.) Can his intelligence compete with ours? No. (He seems to think "slower", he doesn't seem to think ahead a lot, etc.)

The brain of a dog is pretty small. It fits in the palm of a hand.

Edited by *Aqua*

 

On 14/1/2016 at 5:58 AM, *Aqua* said:

@AngelLestat
What do you mean by "inconscience"? I can't find a translation of it. Do you mean unconsciousness?

Yeah, that: unconsciousness. Sometimes I mix my language with English.

Quote

I think my definition is as good as anyone's. There's no definition of what intelligence is.

For me, it starts from level zero and goes up. I would call a light intelligent if it turns on when a photosensitive cell detects no light. And it is not just me; people are used to calling such things that way: "the intelligent car". Some animals do similarly basic stuff: if they see a shadow over them, they move (in case it is a predator). Those are just a few neuron interconnections; keep adding more and more...
I guess it will reach a point where you cannot tell who is more intelligent, you or "it".

Quote

There are a lot of animals whose brain-to-body ratio is greater than in humans, for example birds, dolphins, etc. Again, intelligence doesn't appear to be proportional to brain size. There might be a correlation, though.

Yeah, it is not.

Quote

 

I once did an experiment with my dog. I let him see me putting some tasty stuff on the ground and covering it with a piece of cloth. Then I allowed him to try getting the food. First he tried to paw through the cloth but was unsuccessful. Then he looked at me as if he wanted me to remove the cloth. Then he changed his approach and pulled the cloth away (obviously in a funny way, as he has paws, not hands) and was then able to reach the food.

Was the dog intelligent? Yes. Is his intelligence comparable to ours? Yes. (He tried different approaches, he evaluated the results of the approaches, etc.) Can his intelligence compete with ours? No. (He seems to think "slower", he doesn't seem to think ahead a lot, etc.)

 

Two days ago I had a similar "intelligent dog" experience with my neighbour's dog.
She told me that the other day the dog had been looking for some animal in my park; I knew about this, but I never found the animal. Then she told the dog, "Go, help Ariel", without any gestures.
The dog made his way to my house (going around the boundary until he reached my park door) and waited until I opened it. Then he came in but didn't know what to do, so he looked at his owner, and she told him, "Search for the coipo, show it to Ariel", and he did. :)
By the way, the dog knows me, but not well. He may have worked out which, among all the possible things his owner could be asking, had the highest chance of being right, and it was that.
Because his NN was already trained, and that was the resulting output under those inputs.
That is how an ANN works. And I guess a dog knows that an elephant should come with 4 legs; if it has 3, they see that as a weakness to attack.
Why does it know? Because it is a pattern: that thing is an animal, all animals are symmetric, and most of them have 4 legs. That is something a big ANN would have no difficulty figuring out.

Edited by AngelLestat
