
Researchers shut down an AI that developed its own language.


Recommended Posts

46 minutes ago, Steel said:

The problem these researchers had is that their incentives didn't emphasise that this was intended as a human-facing system, and so needed to interact in human-understandable English. The AIs actually found a much more efficient way to bargain with each other.

I remember a "history of technology" describing the birth of complex systems (the printing press is presumably a great early example: it took something like 20-30 modifications of *everything*, such as using paper rather than vellum and changing the ink a bit, to start printing books). The claim was that simple machines *broke* (you could see why a shovel wouldn't work before you used it), but a complex machine would have *bugs* that were less obvious.

Machine learning has "hives": whole colonies of bugs, one level up. These are things we would call "out of spec" today, but at least, since human programmers try to meet the specs, they hopefully know when the specs are wrong (even if the organisation they work for doesn't want to hear it). With machine learning, there is no way to communicate "what I mean" to the system developing the answer.
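To make that concrete, here is a minimal sketch (hypothetical names and weights, not the researchers' actual code) of the only channel a learning system has for "what I mean": the reward function. If "negotiate in readable English" is never scored, it simply does not exist for the learner.

```python
# Minimal sketch, hypothetical names: a learning agent optimises
# exactly what the reward function scores, and nothing else.

def reward(deal_value: float, utterance: str, english_score) -> float:
    """`english_score` is an assumed callable returning ~1.0 for
    fluent English and ~0.0 for private shorthand (for example,
    likelihood under a fixed language model)."""
    task_reward = deal_value                # what the agents bargain over
    readability = english_score(utterance)  # the part the spec "meant"
    # With weight 0.0 this term vanishes and the agents are free to
    # drift into a more efficient private code; a nonzero weight is
    # one way to pin them to human-readable English.
    return task_reward + 2.0 * readability

# Toy usage with a stand-in scorer:
print(reward(5.0, "i want two books", lambda u: 0.9))
```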


On 28/07/2017 at 1:24 PM, Steel said:

I have a feeling this is a little sensationalised. The researchers most likely shut the project down because they determined that they wanted the end result to be in English. Since the AI had deviated away from that, they had to shut it down, re-evaluate their reward system, and then start again. I honestly don't imagine that they shut it down because they were scared of it, or worried it would get out of control and start sending Terminators back through time.

^ this

In fact, when I read about it this morning, my thoughts were: "oh, so the AI learned something. And that's a surprise because...?"


2 hours ago, wumpus said:

but at least since human programmers try to meet the specs

Who?? Us?!..

2 hours ago, wumpus said:

they hopefully know when the specs are wrong (even if the organisation they work for doesn't want to hear it).  With machine learning, there is no way to communicate "what I mean" to the system developing the answer.

Combinatorial explosion very quickly makes it impossible to debug, or fully cover with tests, any moderately complicated project, so the absence of bugs becomes a matter of probability (a toy calculation below).
So an AI even has an advantage over a human developer: it doesn't give a fig about the managers' fantasies, and it never tries to make things look better than they are.
Also, an AI has no idea what a "deadline" is; it understands only "real-time".
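A toy back-of-the-envelope sketch of that explosion (my own illustrative numbers, assuming an optimistic million test cases per second):

```python
# Toy illustration of combinatorial explosion in testing:
# a component with n independent boolean options has 2**n
# configurations to cover exhaustively.
SECONDS_PER_YEAR = 3600 * 24 * 365
TESTS_PER_SECOND = 1_000_000  # optimistic assumption

for n in (10, 20, 40, 64):
    cases = 2 ** n
    years = cases / TESTS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n} flags -> {cases:.2e} cases -> {years:.2e} years to test")
```

Past a few dozen independent options, "tested" can only ever mean "sampled", hence the probabilistic point above.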

Human DNA is full of bugs and commented-out parts. But has that ever bothered anybody?

P.S.
Also: no bugs, no support. Live and let live.

Edited by kerbiloid
