
Researchers shut down an AI that developed its own language.



I have a feeling this is a little sensationalised. The researchers most likely shut the project down because they determined that they wanted the end result to be in English. Since the AI had deviated away from that, they had to shut it down, re-evaluate their reward system and then start again. I don't honestly imagine that they shut it down because they were scared of it or were worried it would become out of control and start sending terminators back through time.


17 minutes ago, Steel said:

I have a feeling this is a little sensationalised. The researchers most likely shut the project down because they determined that they wanted the end result to be in English. Since the AI had deviated away from that, they had to shut it down, re-evaluate their reward system and then start again. I don't honestly imagine that they shut it down because they were scared of it or were worried it would become out of control and start sending terminators back through time.

A little sensationalised the same way an electron is a little smaller than you :)
Otherwise you are probably correct: they wanted English, and had the result been interesting as a sign of real intelligence they would have kept it running, but it was pointless.
You get this result a lot when trying to get machines to invent things. The evolved circuits are a famous example: they used fewer transistors than we would design in, but the designs tended to be analogue and would not be very practical.
You also get this when training animals, and even humans and organizations, as seen in how the purpose of schools becomes doing well on exams.
 


13 minutes ago, magnemoe said:

A little sensationalised the same way an electron is a little smaller than you :)
Otherwise you are probably correct: they wanted English, and had the result been interesting as a sign of real intelligence they would have kept it running, but it was pointless.
You get this result a lot when trying to get machines to invent things. The evolved circuits are a famous example: they used fewer transistors than we would design in, but the designs tended to be analogue and would not be very practical.
You also get this when training animals, and even humans and organizations, as seen in how the purpose of schools becomes doing well on exams.
 

Well, if you get technical, that's not strictly true. While an electron has no physical extent and is thus infinitely smaller than me, this article has not been infinitely sensationalised :wink:

Back on topic again: this problem is actually more one of human researchers not fully understanding what they are incentivising the AI to do, rather than the AI being "disobedient" or going "off program". At the end of the day, all these types of AIs do is tend towards whatever outcome/behaviour is best incentivised.
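To make that concrete (a purely hypothetical reward function, not anything Facebook has published): if the score only rewards closing a good deal, drifting into unreadable shorthand costs the agent nothing, whereas adding even a crude "stay in English" term changes what the best-scoring behaviour looks like.

```python
# Hypothetical reward for a negotiating agent; illustration only, not
# Facebook's actual objective. english_likelihood is an assumed 0..1 signal
# for how English-like the agent's messages were.

def reward(deal_value, english_likelihood, readability_weight=0.0):
    return deal_value + readability_weight * english_likelihood

# With no weight on readability, shorthand that closes better deals wins:
print(reward(deal_value=9.0, english_likelihood=0.1))                          # 9.0
# Weighting readability makes plain English part of "winning":
print(reward(deal_value=8.0, english_likelihood=0.9, readability_weight=5.0))  # 12.5
```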

Edited by Steel

I told you.

Although, I have to say that given this thing involves Facebook, and the disclaimer goes "hi! We're getting more viewers from Facebook", it's more likely to be junk. There's an inherent problem with today's mass social media.


49 minutes ago, YNM said:

I told you.

Although, I have to say that given this thing involves Facebook, and the disclaimer goes "hi! We're getting more viewers from Facebook", it's more likely to be junk. There's an inherent problem with today's mass social media.

Very true :P I read the article to see if it was a hoax, but it seemed legit, so I decided to share. I (definitely) didn't think it was close to becoming self-aware.

Edited by Spaceception

5 hours ago, Steel said:

I have a feeling this is a little sensationalised. The researchers most likely shut the project down because they determined that they wanted the end result to be in English. Since the AI had deviated away from that, they had to shut it down, re-evaluate their reward system and then start again. I don't honestly imagine that they shut it down because they were scared of it or were worried it would become out of control and start sending terminators back through time.

This is my big issue with AI. For some things (such as chess playing), debugging/testing an AI isn't all that hard: if it plays better chess/wins more often, it is "doing it right"; if not, it is "buggy". For more complicated things, debugging/testing/grading such an AI becomes much harder. Presumably "using a non-English language" is on the "buggy" side.


From the article:

Quote

They do make AI development more difficult though as humans cannot understand the overwhelmingly logical nature of the languages.

I don't find repeating words to indicate quantity particularly logical, intelligent or even efficient. It's easy enough to use the system when you need to indicate just two or three instances of something, but after four or five it is entirely unusable for humans, since we are barely capable of counting in such a manner. While computers would have no problem counting that way, it is still an entirely inefficient use of time. It's much faster to say you need 5 000 apples than it is to say you need apple apple apple apple apple apple apple apple apple apple apple apple...
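To put a rough number on how badly that scales (my own quick illustration, not from the article): a "repeat the word" message grows linearly with the quantity, while an ordinary decimal count grows only with its logarithm.

```python
# Message length in characters: unary repetition ("apple apple apple ...")
# versus an ordinary decimal count ("5000 apples").

def unary_length(n, word="apple"):
    # n copies of the word, separated by single spaces
    return n * len(word) + (n - 1)

def decimal_length(n, word="apples"):
    # the digits, a space, and the word
    return len(str(n)) + 1 + len(word)

for n in (3, 50, 5000):
    print(n, unary_length(n), decimal_length(n))
# 3       17     8
# 50     299     9
# 5000 29999    11
```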

22 hours ago, YNM said:

I told you.

-snip-

https://en.wikipedia.org/wiki/Be_Right_Back

 


19 hours ago, wumpus said:

This is my big issue with AI. For some things (such as chess playing), debugging/testing an AI isn't all that hard: if it plays better chess/wins more often, it is "doing it right"; if not, it is "buggy". For more complicated things, debugging/testing/grading such an AI becomes much harder. Presumably "using a non-English language" is on the "buggy" side.

This is a major issue with critical systems: you cannot verify how the AI would react to unexpected events.
Even if you test many cases, this will not prove how it would react in a slightly different one, and you cannot assume it has common sense.

Say you use an AI for stock trading. This should work well, as it's fast and has no emotions, so it would have no issue cutting losses and taking calculated risks.
However, some event might make it go crazy and lose a lot of money.
This has already happened with conventional software. A common program was used for automating stock trades, and the problem was that lots of stock brokers used it: a small dip in a selection of stocks got all the auto-traders to sell lots of stock, which reinforced the selling, and it ended in a short mini-crash where the US stock exchange had to close down. Restrictions on auto-trading software were later put in place.
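A toy version of that feedback loop looks like the sketch below (made-up numbers, not a model of the real flash crash): give the auto-traders slightly different stop-loss thresholds and a small dip cascades on its own.

```python
# Toy sell-off feedback loop: auto-traders with slightly different stop-loss
# levels. A small dip triggers the most nervous ones, their selling pushes the
# price lower, which triggers the next group, and so on. Illustrative only.

def simulate(price=100.0, dip=1.5, impact_per_sale=0.2, rounds=20):
    stops = [95.0 + 0.1 * i for i in range(50)]    # 50 traders, stops from 95.0 to 99.9
    price -= dip                                   # the initial small dip
    for r in range(rounds):
        sellers = [s for s in stops if price < s]  # everyone whose stop-loss is hit
        if not sellers:
            break
        stops = [s for s in stops if price >= s]
        price -= impact_per_sale * len(sellers)    # selling pushes the price further down
        print(f"round {r}: {len(sellers)} sold, price now {price:.1f}")

simulate()
```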
 


1 hour ago, magnemoe said:

This is a major issue with critical systems: you cannot verify how the AI would react to unexpected events.
Even if you test many cases, this will not prove how it would react in a slightly different one, and you cannot assume it has common sense.
...
 

To be fair to AIs, most of the time you can't verify how a human will react to unexpected events, and they're just as likely to react without common sense (even worse, they've got these things called "emotions" and "impulses" that tend to make a mess of things).

Edited by Steel

10 minutes ago, Steel said:

To be fair to AIs, most of the time you can't verify how a human will react to unexpected events, and they're just as likely to react without common sense (even worse, they've got these things called "emotions" and "impulses" that tend to make a mess of things).

In general, yes, people are weird, but in a professional setting with experienced people they are not; there, people tend to behave rationally enough. Computers, on the other hand, are very predictable but also limited.
You can have a man in the loop; this is common among humans too, as in team work or needing permission.
However, this negates a lot of the benefit of an AI beyond cheap labour, which is a major one. Alternatively, you can have it do all sorts of non-critical stuff and just check random samples.


On 7/29/2017 at 1:09 AM, YNM said:

I told you.

-snip-

 

22 hours ago, Shpaget said:

Well, I thought someone there had to do it.


 

21 hours ago, magnemoe said:

In general, yes, people are weird, but in a professional setting with experienced people they are not; there, people tend to behave rationally enough. Computers, on the other hand, are very predictable but also limited.
You can have a man in the loop; this is common among humans too, as in team work or needing permission.
However, this negates a lot of the benefit of an AI beyond cheap labour, which is a major one. Alternatively, you can have it do all sorts of non-critical stuff and just check random samples.

Humans have gathered years of self-experience and millions of years of generational knowledge. Any AI won't have that. But for sure your AI could run lots of simulations or something?


1.
Any language is just a tool for exchanging information.
To get information = to decrease indeterminacy.
By physical definition, delta-Information = −delta-Entropy.

Statistical weight of a system: W = the number of possible states of the system.
Entropy = k ln W.

On receiving a message: delta-Information = −delta-Entropy = k ln(W_before / W_after).

Can they give an example where this inter-AI exchange made a system's statistical weight decrease?

If not, it's not a language, just noise, maybe autocorrelated.
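For anyone who wants to plug numbers into that (a worked example of my own, not from the article): with k = 1 the answer is in nats, with k = 1/ln 2 it is in bits.

```python
import math

def information_gain(w_before, w_after, k=1 / math.log(2)):
    """Information received, in bits by default: k * ln(W_before / W_after)."""
    return k * math.log(w_before / w_after)

# A message that narrows 8 equally likely states down to 2 carries 2 bits:
print(information_gain(8, 2))   # 2.0
# A "message" that leaves the number of possible states unchanged carries nothing:
print(information_gain(8, 8))   # 0.0
```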

2.
Any natural language obeys Zipf's law. Even when it's not a language.

I.e. the frequency of every "atom" of the language is inversely proportional to its rank in the frequency-sorted alphabet.

Can't find the word "Zipf" in the article. Did they check this?
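The check itself is trivial to run if anyone has a transcript of the bot chatter (a rough sketch; "transcript.txt" is a made-up file name): sort the tokens by frequency and see whether frequency falls off roughly as 1/rank.

```python
from collections import Counter

# Rough Zipf's-law check. "transcript.txt" is a placeholder; any dump of the
# agents' messages would do.
with open("transcript.txt") as f:
    words = f.read().lower().split()

counts = Counter(words).most_common()
for rank, (word, freq) in enumerate(counts[:10], start=1):
    # For Zipf-like data, rank * freq stays roughly constant.
    print(f"{rank:2d}  {word:15s} freq={freq:6d}  rank*freq={rank * freq}")
```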

Edited by kerbiloid

Sometimes I think @kerbiloid has also developed his own language, which relies primarily on a combination of various web-sourced blocks of tangentially related information to convey meaning... :D 

 

 

 

(In case this wasn't obvious, I'm making a joke. No insult meant.)

Edited by Streetwind

http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922
The AI simply developed a slang for exchanging items, which was its purpose.

Tabloids jumped on this in their run for hourly click bait.
They jumped the shark decades ago; now the shark looks like this:
https://wow.gamepedia.com/File:Epicus_Maximus_mousepad_art.jpg


On 28.7.2017 at 11:51 PM, wumpus said:

This is my big issue with AI. For some things (such as chess playing), debugging/testing an AI isn't all that hard: if it plays better chess/wins more often, it is "doing it right"; if not, it is "buggy". For more complicated things, debugging/testing/grading such an AI becomes much harder. Presumably "using a non-English language" is on the "buggy" side.

The problem is that kids, before learning their native language, mostly use a self-invented one. Mostly these are phonetic renderings of some words, but twins, for example, invent absolutely new communication protocols... are they "buggy" too?

If we translate this to an AI, the basic language for such a being is binary. And how do the researchers decide that a bunch of 0s and 1s leads to an English outcome?


On 28/07/2017 at 10:51 PM, wumpus said:

This is my big issue with AI. For some things (such as chess playing), debugging/testing an AI isn't all that hard: if it plays better chess/wins more often, it is "doing it right"; if not, it is "buggy". For more complicated things, debugging/testing/grading such an AI becomes much harder. Presumably "using a non-English language" is on the "buggy" side.

 

Spoiler

For now I'm going to ignore the difference between "buggy" and "not what we want the AI to do". Bugs are problems inherent in the code, so an AI can be doing things other than what you want it to do, but if its underlying code is fine, it's not technically buggy, just poorly incentivised.

*DISCLAIMER* Any reference I make to "AI" below probably actually means "machine-learning system".

It's actually not as hard as you might think. In fact, on a basic level, you would (or at least could) use the same type of system for a simple chess AI as for a complex chatbot; a basic version that's quite widely used is a scoring system. Each time the AI runs through a scenario, it is given a score depending on how close it gets to the incentivised goals. Over time the AI learns which combination of actions leads to the highest scores and so tends towards that behaviour. In this way it's actually no harder to train an AI to do complex tasks than to do simple ones (assuming, of course, that it has been properly incentivised); it just takes longer (much longer).

The problem that these researchers had is that their incentives didn't emphasise the fact that this was intended as a human-facing system, and so needed to interact in human-understandable English. The AIs actually found a much more efficient way to bargain with each other.
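A bare-bones sketch of that scoring idea (a generic epsilon-greedy learner, not Facebook's actual training code): the agent tries actions, gets a score, and drifts towards whichever action has scored best so far, whatever that action happens to be.

```python
import random

# Minimal score-driven learner (epsilon-greedy). The environment just hands
# back a noisy score per action; the agent never sees why an action is good,
# it only chases higher scores, which is exactly how a missing incentive
# (like "stay in English") goes unenforced. Illustrative only.

ACTIONS = ["plain_english_offer", "terse_shorthand_offer"]
TRUE_SCORES = {"plain_english_offer": 0.8, "terse_shorthand_offer": 1.0}

def score(action):
    return TRUE_SCORES[action] + random.gauss(0, 0.1)  # noisy reward

estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for step in range(1000):
    if random.random() < 0.1:                      # explore occasionally
        action = random.choice(ACTIONS)
    else:                                          # otherwise exploit the best estimate
        action = max(estimates, key=estimates.get)
    r = score(action)
    counts[action] += 1
    estimates[action] += (r - estimates[action]) / counts[action]  # running average

print(estimates)  # ends up favouring whichever action scores best
```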

Edited by Steel
