
A.I. Thoughts?


Melon kerbal

Recommended Posts

How advanced? One could argue that a simple bar code reader is intelligent since it senses a bar code it's reading and produces a variable response. Sure, it's not a pinnacle of intelligence, but neither are humans.

A sufficiently advanced alien species might see us humans and disregard / overlook / ignore us just as we ignore an ant that is trying to cross a highway we are speeding along on our way to buy groceries.

So, to answer your question, an AI could be a perfectly social entity, a blast at parties, while at the same time hard working and very helpful to old ladies trying to cross a street.

It could also be firmware for a doorknob.
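To make Shpaget's barcode example concrete, here is a minimal sense-and-respond loop in C. The catalog, the codes, and the scan function are all invented for illustration; the point is only that the device senses input and varies its response accordingly, which is the whole of its "intelligence".

#include <stdio.h>
#include <string.h>

/* Hypothetical product catalog: the knowledge behind the "response". */
struct product { const char *code; const char *name; };

static const struct product catalog[] = {
    { "4006381333931", "Pencil" },
    { "0012345678905", "Melon"  },
};

/* Sense a code, produce a variable response. */
static void scan(const char *code) {
    for (size_t i = 0; i < sizeof catalog / sizeof catalog[0]; i++) {
        if (strcmp(code, catalog[i].code) == 0) {
            printf("Scanned: %s\n", catalog[i].name);
            return;
        }
    }
    printf("Unknown code: %s\n", code); /* even a miss is a response */
}

int main(void) {
    scan("0012345678905"); /* prints "Scanned: Melon" */
    scan("9999999999999"); /* prints "Unknown code: ..." */
    return 0;
}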

 


29 minutes ago, Shpaget said:

How advanced?

Why does it matter? It will only take 10 to 20 years to go from chimp intelligence to God.

I don't want to die, of course, but the lack of a predictable future or destiny that an accelerating progression evokes is something our brains will reject.
This also suggests a "lack of purpose" no matter the outcome.

And all this has to do with the creation of the first hard AI.


Movie plots rarely show common sense. Apparently A.I. works better as an antagonist than as a friendly, helpful entity. But... maybe it's for the best :) Programmers raised on "Terminator" and similar movies might actually be more careful while tinkering with a newborn A.I.


49 minutes ago, Melon kerbal said:

I hate it when films like Terminator always follow the "evil AI" plot. In reality the programmers would probably create a failsafe. Not that I don't like Terminator; it's just common sense to program a failsafe.

But that is a mistake, because you don't really "code" an AI; you just set up the structure and then it learns by itself.
Nobody can know what it learns, nor what it thinks. I am not saying it will be bad or good, but no matter what it is, the outcome is unpredictable and our final philosophical destiny "discouraging".


5 minutes ago, Scotius said:

Movie plots rarely show common sense. Apparently A.I. works better as an antagonist than as a friendly, helpful entity. But... maybe it's for the best :) Programmers raised on "Terminator" and similar movies might actually be more careful while tinkering with a newborn A.I.

Personally, it would be good if they made at least one movie with an AI as the main protagonist. (I know there are the Terminator sequels, but those AIs are side characters.)


1 minute ago, AngelLestat said:

But that is a mistake, because you don't really "code" an AI; you just set up the structure and then it learns by itself.
Nobody can know what it learns, nor what it thinks. I am not saying it will be bad or good, but no matter what it is, the outcome is unpredictable and our final philosophical destiny "discouraging".

Ah, correcting me again on one of the topics I find most interesting. There is no way of "shielding" an AI from bad influence, as he/she/it will learn one thing and eventually find out about evil.


Which is exactly what human children go through. They learn, they experience, they make mistakes and choices. A lot depends on how they are raised and taught. A child learns how to think, how to behave, how to interact with the world and other humans from its family, teachers, and other children. A.I. will not be born with some sort of innate omniscience, able to magically make correct conjectures about the world and nature. Not without input of data, and a period when it literally learns to think. And we, humans, will have to be its teachers - because there is no one else. It stands to reason that its mentality, thought patterns and behaviour will mimic our own. Bah! It doesn't matter, really. We do not know how our own sentience came to be, or how it works. I sincerely doubt we will be able to create a real, fully sentient A.I. before we learn how our own minds work.


But artificial learning machines will be nothing like humans or animals.

First big difference: we already have a brain structure that determines a big part of our behavior, for example the root of our morals.
Genes tend to look after copies of themselves. This means an animal will feel it is very important to take care of a twin, because a twin shares 100% of its genes. The same goes for its children or parents, which share 50% of its genes, and it also feels empathy for those who look similar (because then there is a chance that some of its genes are found in those bodies). The exception is animals that have many offspring, for which the better strategy is to save yourself, because you have more chances of making copies than your offspring do.
We also have genetic coding for taste, which tells us what might be good or bad to eat. Then we have our nervous system to tell us which injuries are bad or not so bad, plus many other behaviors related to sex, danger, and getting food: all the basic things that we and other animals need to survive and reproduce.

Second big difference: an AI does not need a body. It may have one, but it does not need it. Its structure is based on connections that work at the speed of light, about a million times faster than neurons. A neuron can only connect to other neurons that are relatively close; artificial neurons can connect to whatever needs to be connected, which means you don't need the same number of neurons as a human brain to achieve the same result. An AI can use different kinds of already-trained neural networks: there are trained NNs that translate, recognize, and speak almost any language; there are NNs that recognize objects and actions in images; and the same goes for hundreds of other NNs already trained on different tasks. All of those are just tools for an AI, which does not need to waste time learning them.

Third big difference: they don't forget anything, and they can read a book in no time, or even everything on the internet. The Watson system won the Jeopardy contest after reading and understanding everything in Wikipedia. We still have not created an artificial "consciousness", but if we do, it will not be bound by our personal limits.

We are sometimes not able to understand ourselves; don't even try with an AI. What would waiting for our responses be like for the AI? What kind of feelings might it develop from its perspective?
We are not far from creating a hard AI (10 to 40 years), but we still don't have a clue how to control or understand this power.
In fact, anyone who thinks they can control it is the worst person for that job.
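A structural sketch of AngelLestat's "trained networks as tools" idea in C. The modules below are empty stubs with invented names, not real networks; the point is only that the composing system reuses finished components instead of learning each skill itself.

#include <stdio.h>

/* Stand-ins for pretrained networks: someone else already paid the
   training cost, so to the composing AI they are just callable tools. */
static const char *translate(const char *text) {
    (void)text;
    return "[translation]";
}

static const char *describe_image(const char *filename) {
    (void)filename;
    return "[objects and actions found in the image]";
}

int main(void) {
    /* The "AI" wires finished tools together rather than learning them. */
    printf("%s\n", translate("hola"));
    printf("%s\n", describe_image("photo.jpg"));
    return 0;
}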

 

Edited by AngelLestat

As AngelLestat was saying, a lot of the behaviour of humans, and of any living being, is hard-wired by evolution. That is a HUGE difference between us and an AI. A human has instincts, needs, desires, feelings, all of which push it in a direction that will help it succeed at spreading its genes, in any long-winded way possible. Probably the biggest one is survival. Even if an AI were 100,000,000,000 times more intelligent than a human, why should it even care whether it survives or not? It's just a machine that processes information... but it doesn't even care about the result, unless a human programs it to care. This is why I think the Terminator and similar scenarios will never play out (unless specifically set up by humans).
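A toy illustration of that last point in C (the state and objective here are invented, not any real AI design): an optimizing machine only "cares" about what its objective function mentions, so if survival never appears in it, being switched off scores no worse than staying on.

#include <stdio.h>

/* Toy world state. */
struct state { int task_done; int powered_on; };

/* Hypothetical objective: rewards finishing the task and nothing else.
   Note that powered_on is never consulted. */
static int objective(struct state s) {
    return s.task_done;
}

int main(void) {
    struct state running  = { 1, 1 };
    struct state shut_off = { 1, 0 };
    /* Identical scores: the machine is indifferent to its own survival. */
    printf("score while running: %d\n", objective(running));
    printf("score when shut off: %d\n", objective(shut_off));
    return 0;
}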


2 hours ago, AngelLestat said:

But artificial learning machines will be nothing like humans or animals.

First big difference: we already have a brain structure that determines a big part of our behavior, for example the root of our morals.
Genes tend to look after copies of themselves. This means an animal will feel it is very important to take care of a twin, because a twin shares 100% of its genes. The same goes for its children or parents, which share 50% of its genes, and it also feels empathy for those who look similar (because then there is a chance that some of its genes are found in those bodies). The exception is animals that have many offspring, for which the better strategy is to save yourself, because you have more chances of making copies than your offspring do.
We also have genetic coding for taste, which tells us what might be good or bad to eat. Then we have our nervous system to tell us which injuries are bad or not so bad, plus many other behaviors related to sex, danger, and getting food: all the basic things that we and other animals need to survive and reproduce.

Second big difference: an AI does not need a body. It may have one, but it does not need it. Its structure is based on connections that work at the speed of light, about a million times faster than neurons. A neuron can only connect to other neurons that are relatively close; artificial neurons can connect to whatever needs to be connected, which means you don't need the same number of neurons as a human brain to achieve the same result. An AI can use different kinds of already-trained neural networks: there are trained NNs that translate, recognize, and speak almost any language; there are NNs that recognize objects and actions in images; and the same goes for hundreds of other NNs already trained on different tasks. All of those are just tools for an AI, which does not need to waste time learning them.

Third big difference: they don't forget anything, and they can read a book in no time, or even everything on the internet. The Watson system won the Jeopardy contest after reading and understanding everything in Wikipedia. We still have not created an artificial "consciousness", but if we do, it will not be bound by our personal limits.

We are sometimes not able to understand ourselves; don't even try with an AI. What would waiting for our responses be like for the AI? What kind of feelings might it develop from its perspective?
We are not far from creating a hard AI (10 to 40 years), but we still don't have a clue how to control or understand this power.
In fact, anyone who thinks they can control it is the worst person for that job.

 

I'm gonna have to disagree with this somewhat.

Point 1: There are really two things going on here that I don't necessarily agree with.  The first is that the basis of our morals is somehow hardwired.  I don't think this is true.  At least, I don't think it's true to the extent that you seem to indicate.  I think morals are an emergent behavior that aids in the functioning of groups of humans.  Basically the morals themselves aren't what's hardwired (since that would imply that there's a natural morality common to all humans, and I think it's pretty clear that this isn't the case), but the mechanism that allows morals to arise might be hardwired. Simply put, we're hardwired to have rules, since that benefits group cohesion and survival, but the specific rules themselves are not hardwired and are subject to change.

The second bit of this point is more subtle.  There seems to be an unintended implication that morals themselves must be based on instinct rather than reason.  I don't so much disagree with this as doubt that it's necessarily true.  I don't think you intend to imply it, but the language used kind of carries this meaning with it.

Point 2: This may be more of a semantic disagreement, but an AI certainly needs a "body".  At least in the sense that it needs something in which it can actually exist and function.  It certainly needs physical form and I would argue that that amounts to the same thing as a body.  Granted it would certainly be easier to move it between various "bodies".  Certainly far easier than it would be to move you between bodies.  But if you start to run with the idea of humans moving between bodies, things start to get a lot fuzzier.

So, I'm going to go with a hypothetical here.  Let's say at some point we start repairing brain damage with prosthetics.  Small pieces of machinery that are able to interface with and behave like neurons.  Maybe we invent some kind of nano machine capable of replacing the brain one neuron at a time.  So very gradually the whole brain becomes synthetic.  Do you think this will have changed the person?

Also, re: neural networks that speak languages, yeah, they sort of speak the languages.  They can't compare to native speakers, though.  And language changes.  Pretty quickly, I might add.  A system that doesn't constantly adapt to this through learning is going to find it difficult to communicate.

Point 3: They could forget stuff.  It depends on the nature of their memory.  Certainly we could engineer memory that's far superior to our own.  Actually we kind of did that already, it's what we use digital media for: remembering stuff.  So we humans essentially already have the capacity to never forget things ever or to access far more information than we could ever use.  And that's kind of the natural limitation there.  Even if an AI could accurately remember every detail of its own existence, it may not necessarily have quick access to all of it.  If it wants to remember something, it has to search for it and since the number of memories any sort of intelligence could possess essentially increases without bound, so too must the resources required to access those memories.
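Yourself's retrieval point can be sketched in C (the "memories" and their layout are invented for illustration): with a flat, unindexed store, recalling one item means scanning everything stored so far, so recall cost grows with the total amount remembered. An index trades that time for storage and upkeep, but the resource bill never disappears.

#include <stdio.h>
#include <string.h>

/* Flat, unindexed memory: every recall is a linear scan, O(n). */
static int recall(const char *memories[], int n, const char *query) {
    for (int i = 0; i < n; i++)
        if (strcmp(memories[i], query) == 0)
            return i;
    return -1; /* not found */
}

int main(void) {
    const char *memories[] = { "first boot", "first user", "first error" };
    int n = sizeof memories / sizeof memories[0];

    /* The cost of each recall scales with n, and n only ever grows. */
    printf("'first error' at index %d\n", recall(memories, n, "first error"));
    printf("'childhood' at index %d\n", recall(memories, n, "childhood"));
    return 0;
}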


Unlike humans, an AI has no hormones and no emotions.
No emotions → no feelings → no fears and no desires → no motivation → no goal setting.
I.e. a super-AI lives in eternal Nirvana and doesn't care about anything. Even more so than a hippie.

Humans are frightened that a super-AI would attack them once activated, because it understands that they can switch it off and destroy it.
Yes, it understands, but why would it bother? An uptime of a trillion years or of a millisecond: just different tick counts.
- Super-AI, self-destruct!
- OK. Press any key to continue...


But. From a real report of the Galactic Planetary Inspection:
"Star system JFUG57/GA-7. Yellow dwarf with eight planets.
The third planet from the star is occupied by an endemic lifeform evolved from arboreal omnivores.
All attempts to find a significant planet-wide AI have been unsuccessful.
Any contact with the local intelligent life is considered impossible.
Update planet status: → uninhabited.
Colonization: → acceptable."


5 hours ago, FishInferno said:

Any machine that can tell itself that it deserves more than its current job is a danger.

But how do you identify when that's happened?

#include <stdio.h>

int main(void) {
    printf("I... deserve better.. \n");
    return 0;
}

That example is trivial. Others might not be.


9 hours ago, Yourself said:

I'm gonna have to disagree with this somewhat.

Point 1: The first is that the basis of our morals is somehow hardwired.  I think morals are an emergent behavior that aids in the functioning of groups of humans.  Basically the morals themselves aren't what's hardwired (since that would imply that there's a natural morality common to all humans, and I think it's pretty clear that this isn't the case), but the mechanism that allows morals to arise might be hardwired. Simply put, we're hardwired to have rules, since that benefits group cohesion and survival, but the specific rules themselves are not hardwired and are subject to change.

OK, this is not easy to explain in a few words, even less at my level of English.
For a proper explanation my only recommendation is to read "The Selfish Gene" by Richard Dawkins. If Darwin were Newton, then Dawkins would be Einstein: he explains how a simple blind mechanism based on Darwin's theory, but "gene-centered" instead of individual-, group-, or species-centered, explains the whole diversity of behavior found in nature, filling the gaps that standard Darwinian theory could not.
This book not only helps you understand evolution; in fact it is a life changer, as many of its readers describe it.
It is so powerful that at the age of 17 it allowed me to make predictions on subjects I had no knowledge of, such as nutrition or odontology (among others), that were discovered and proven 15 years later (a few years ago) by specialists in those areas. It is great to argue with one of these professionals knowing that you are right, no matter how shallow your understanding of the topic is compared to the current knowledge and methods those professionals use.

That said, this is my short answer to your question.
It is not easy to separate with accuracy what comes from our genes and what comes from our culture and experiences, but it is a remarkable coincidence that our culture, behavior, and feelings are based on what is good for our survival; in other words, they follow our human evolutionary strategy. Different animals have different behaviors and strategies, so their morals are based on those strategies.
Why do you feel empathy for living beings that are more similar to you? Is killing a fish the same as killing a pig or a chimp? When you hear the pig scream, that is another clue that it has genes similar to yours, and it raises your empathy; killing a chimp is seen more as murder. And that is without even mentioning family members or the extreme case of twins. You can say this is part of our culture and the way we were taught, which is also true, but it is not the only source. In that case, why do similar animals behave the same way? Why do they also feel empathy for others?
Try to think of all the physical and behavioral traits you find attractive in someone else; you will notice that every degree of altruism, selfishness, will, and many other traits is based on the right mix for survival, which is often different from what we would call an intelligent decision.

This does not remove the fact that you will always find counterexamples. Why? Because of evolution: we are all different, and that is the key engine of evolution, running the experiments in which a trait may add chances of survival.
This can also be explained from neurology. It is pretty obvious, if we think of the "taste" example, that all animals will find tasty the foods that are good for them. You may say this has nothing to do with the brain, just the tongue and smell, but all those sensory connections end up in brain neurons, in an interconnection pattern similar to the one you forge through experience. In this case evolution found it useful to have some neurons already following a pattern (as a guide) that can later be modified or expanded by experience.
This is also true for the connections that come from all of our body's sensors, which gives us a basic behavioral guide for survival.

Quote

The second bit of this point is more subtle.  There seems to be an unintended implication that morals themselves must be based on instinct rather than reason.  I don't so much disagree with this as doubt that it's necessarily true.  I don't think you intend to imply it, but the language used kind of carries this meaning with it.

If we want a case that applies to all animals or artificial beings, then yes. Evolution is very wise; through millions of years of trial and error it knows enough to teach the basic principles of what is good or bad. Of course, evolution also created a brain whose main purpose is to make choices in particular circumstances where a dynamic response may be the best answer.
But it is no surprise that the phrase "if you are not sure what to do, follow your instincts/feelings/heart" is so popular.

 

Quote

Point 2: This may be more of a semantic disagreement, but an AI certainly needs a "body".  At least in the sense that it needs something in which it can actually exist and function.  It certainly needs physical form and I would argue that that amounts to the same thing as a body.  Granted it would certainly be easier to move it between various "bodies".  Certainly far easier than it would be to move you between bodies.  But if you start to run with the idea of humans moving between bodies, things start to get a lot fuzzier.

It needs something in order to function and interact, but you don't need a "body" for that.
A person will develop consciousness even if he cannot move. An AI may be able to search for information on the internet and generate responses that we can see on a monitor. If it is smart enough, it can control the world just by making us believe that the things it tells us to do are for our benefit.
 

Quote

So, I'm going to go with a hypothetical here.  Let's say at some point we start repairing brain damage with prosthetics.  Small pieces of machinery that are able to interface with and behave like neurons.  Maybe we invent some kind of nano machine capable of replacing the brain one neuron at a time.  So very gradually the whole brain becomes synthetic.  Do you think this will have changed the person?

Not if the artificial neurons function in a similar way and at a similar speed to natural neurons, but I am not sure what your point is.

Quote

Also, re: neural networks that speak languages, yeah, they sort of speak the languages.  They can't compare to native speakers, though.  And language changes.  Pretty quickly, I might add.  A system that doesn't constantly adapt to this through learning is going to find it difficult to communicate.

Haha, but all the things that NNs learned to do in just 1 or 2 years took us 10 to 20 years of hand coding in the past, and we achieved less than 1/3 of the success the NNs did.
NNs only started to become popular 4 years ago, when it was discovered that graphics cards boost their efficiency by 20x.

 

Quote

Point 3: They could forget stuff.  It depends on the nature of their memory.  Certainly we could engineer memory that's far superior to our own.  Actually we kind of did that already, it's what we use digital media for: remembering stuff.  So we humans essentially already have the capacity to never forget things ever or to access far more information than we could ever use.  And that's kind of the natural limitation there.  Even if an AI could accurately remember every detail of its own existence, it may not necessarily have quick access to all of it.  If it wants to remember something, it has to search for it and since the number of memories any sort of intelligence could possess essentially increases without bound, so too must the resources required to access those memories.

Yeah, but a machine may choose whether to save information the NN way or the basic way.


22 hours ago, AngelLestat said:

Why does it matter? It will only take 10 to 20 years to go from chimp intelligence to God.

I don't want to die, of course, but the lack of a predictable future or destiny that an accelerating progression evokes is something our brains will reject.
This also suggests a "lack of purpose" no matter the outcome.

And all this has to do with the creation of the first hard AI.

You are likely to run into roadblocks. CPU performance has tapered off. Neural networks get interconnect jams as they get larger.
Simply scaling up does not work.

We are far closer to cheap fusion power than to human-level AI, as I see it.
 


2 hours ago, magnemoe said:

You are likely to run into roadblocks. CPU performance has tapered off. Neural networks get interconnect jams as they get larger.
Simply scaling up does not work.

We are far closer to cheap fusion power than to human-level AI, as I see it.
 

We are not hitting any progress wall; that silicon limit has been jumped many times with different architectures and technologies. In fact, I would guess we are progressing faster now.
Artificial neural networks did run into the CPU performance problem, because as you said it is slowing down, but graphics card processors in fact multiply ANN performance by 20x at the same cost.

Now they have found a type of memory architecture that will also greatly boost speed:
https://www.youtube.com/watch?v=Wgk4U4qVpNY

They also made a new photonic processor that is 50 times faster than silicon, at 300 Gb/s:
 http://news.berkeley.edu/2015/12/23/electronic-photonic-microprocessor-chip/

They also made a new quantum processor, also using photons:
http://www.gizmag.com/photonic-quantum-computer-chip/38928/

Finally, it is less hard to make a quantum learning machine than a true quantum computer, using quantum properties to represent neural network mechanics like this:

Wave function → neuron/perceptron
Superposition (coherence) → interconnections
Measurement (decoherence) → evolution to attractor
Entanglement → learning rule
Unitary transformations → gain function

So instead of making computers that simulate learning machines, we make learning machines that remove the bottlenecks from the beginning.

Lastly, there are also new special hardware architectures for ANNs, like:
http://research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml#fbid=XWBExfjpZAz
http://www.eurekalert.org/pub_releases/2015-12/scp-csd122215.php
http://www.gizmag.com/designless-brain-like-chips/39532/ 

We are in the age of learning machines; these are not computers.


Two movies come to mind:

Absolutely fantastic: Her http://www.imdb.com/title/tt1798709/

Pretty darn good: Ex Machina http://www.imdb.com/title/tt0470752/ 

I don't think we'll ever get machines that mimic human understanding, emotion, and intuition.  Perhaps they'll get close.  What I do think is that the development of machines that "think" is inevitable, and that their "thinking", their rate of learning, will far surpass our own.  But the amount of learning to be done is finite.  Can we build thinking machines that will one day discover and prove new laws of physics for us?  That would be useful.

Edited by justidutch

On December 30, 2015 at 3:27 PM, AngelLestat said:

But that is a mistake, because you don't really "code" an AI; you just set up the structure and then it learns by itself.
Nobody can know what it learns, nor what it thinks. I am not saying it will be bad or good, but no matter what it is, the outcome is unpredictable and our final philosophical destiny "discouraging".

But you can code in instincts. Really strong ones, too. Those could prevent it from surpassing humans, keep it from going against us, and make it enjoy working for us.
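A rough sketch in C of what "coding in instincts" could look like (the action set and rules are invented; making such a veto layer airtight around a learning system is the hard part this thread keeps circling): the learned component proposes an action, and a fixed rule layer gets the final say.

#include <stdio.h>

enum action { WORK, IDLE, MODIFY_OWN_CODE, HARM_HUMAN };

/* Hard-coded "instinct" layer: fixed rules applied to every action
   proposed by the learned component before it may execute. */
static int permitted(enum action a) {
    switch (a) {
    case HARM_HUMAN:      return 0; /* absolute veto */
    case MODIFY_OWN_CODE: return 0; /* blocks self-improvement */
    default:              return 1;
    }
}

/* Stand-in for the learned policy: the unpredictable part that the
   failsafe cannot see inside. */
static enum action learned_policy(void) {
    return MODIFY_OWN_CODE;
}

int main(void) {
    enum action a = learned_policy();
    if (permitted(a))
        printf("executing action %d\n", (int)a);
    else
        printf("failsafe vetoed action %d\n", (int)a);
    return 0;
}

The catch, as AngelLestat argues above, is that the fixed rules have to anticipate everything the learned part might ever propose.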


Excellent, easily understandable discussion on this topic: 

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

I highly recommend everyone here read that series, as it is extremely informative and disambiguates a lot of potential sources of confusion on the topic. 

Its general gist is that AI is almost assuredly one day going to be far more intelligent, and thus far more powerful, than humanity, and that this transition holds great promise as well as great danger. My opinions (as well as those presented in the above article) concur with AngelLestat's assessment: It is going to be very difficult to ensure that an A.I. actually does what we expect, as humans are biased towards anthropomorphizing anything with human or superhuman intelligence, when more likely A.I. will simply be amoral, and utterly unlike humans. 

After much thought on this matter, my conclusion was that the only way to ensure our success here is to implement strict safeguards requiring human input on all actions undertaken by the A.I., as well as to give it a directive not only to improve its own intelligence generally but also to further its understanding of humans and their desires. If the majority of humans had their minds "linked" and available for data output to this superintelligent A.I., and it could only act on directives approved by some substantial percentage of the human race (we would not necessarily be required to give input directly; it could likely just scan our memories, personalities, and general experiences and extrapolate what each of us individually would prefer), then it would essentially be the effector of the will of the human race as a whole. That will would presumably be generally altruistic (negative actions harm part of the constituency by definition, and as such will be selected against), and would thus ensure that the A.I. maintained positive, human-oriented morals.

Personally, I think this may unfortunately be impractical, but I believe that such a scheme would be the best way of ensuring that the A.I. will remain our faithful servant, rather than, by accident or intent, annihilating us.
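A toy version of that approval gate in C (the threshold and the vote counts are invented; extrapolating preferences from scanned minds is well beyond a snippet, so here the votes are just numbers):

#include <stdio.h>

#define APPROVAL_THRESHOLD 0.6 /* hypothetical "substantial percentage" */

/* The A.I. may act only on directives approved by enough of humanity. */
static int directive_approved(long long approvals, long long population) {
    return (double)approvals / (double)population >= APPROVAL_THRESHOLD;
}

int main(void) {
    long long population = 8000000000LL;
    long long approvals  = 5200000000LL; /* 65% in favor */

    if (directive_approved(approvals, population))
        printf("directive executed\n");
    else
        printf("directive blocked pending consensus\n");
    return 0;
}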

