
So Computers Can Hallucinate Now


Nuke


And here comes the part where it will be proven that robots can even do art better than a human.

One would have thought that in the 'jobless' world of the future, we would've still at least had creativity in order to give ourselves relevance. So much for that.


I read that article the other day (Google's). I was going to post it here, but decided against it. Trippy?... yes. Weird?... indeed. I'm not quite convinced they (the developers) even know what it is that they're looking at (the resulting images) in their interpretations. Reminds me of Electric Sheep.


Do you mean to say that it isn't a step closer to creating sentient AI?

I'm not sure what you're getting at by saying "they don't know what they're looking at." Do you mean that you have an alternate interpretation? Or simply that it's too soon to jump to conclusions about what it means?

I don't think it's fair to compare it to Electric Sheep, which seems to be little more than distributed fractal generation. That's a lot less intuitive than the 'art' images.



I'd say it's too soon to jump to conclusions. Same with the notion of it being a step - maybe (at best) a baby step... the machine is still running off a set of given rules, not making them up on its own.


Since it is a neural network, it is pretty much doing both at the same time.


It seems a couple of people are seriously misunderstanding how these images came to be, and what is special about them. It certainly was not available 10 years ago, though something may have produced look-alike results in certain cases.


https://en.wikipedia.org/wiki/Electric_Sheep

Initial release 1999

And this is just a public thing.

Not identical, but these things are definitely not new.


Again, you seem to be misunderstanding the nature and purpose of these images. Maybe the posted articles are not making it entirely clear what is going on, so I would suggest you read the ones here and here. The patterns are nothing new, but the fact that the software has learned these patterns from images and tries to interpret and fill in other images is. That is something completely different from a fractal algorithm - with or without sheep.

It is fundamentally a different mechanism.
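
The core trick behind those Google images is surprisingly compact. Here is a rough sketch of the idea in Python/PyTorch - emphatically not Google's actual code, and the pretrained model, layer cut-off, step size and file name are all just placeholder choices - but it shows the mechanism: take a network that has already learned patterns from photos, then repeatedly nudge an input image so that whatever the network already 'sees' in it gets amplified.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A network that has already learned patterns from millions of photos;
# keep only the first few convolutional feature layers.
cnn = models.vgg16(pretrained=True).features[:16].eval()
for p in cnn.parameters():
    p.requires_grad_(False)

# Turn some input photo into a tensor the network accepts
# (normalisation is omitted here for brevity).
img = transforms.ToTensor()(Image.open("input.jpg").convert("RGB").resize((224, 224)))
img = img.unsqueeze(0).requires_grad_(True)

# Gradient ascent on the *image*: nudge the pixels so that whatever the
# chosen layer already "recognises" in the picture gets amplified.
for _ in range(20):
    score = cnn(img).norm()      # how strongly the layer responds
    score.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

# 'img' now holds the dreamed-up version of the input picture.
```

Run that for enough iterations and the eyes, dogs and pagodas start surfacing wherever the network thinks it glimpses them.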

Working alongside a PhD student at New York University's Courant Institute of Mathematical Sciences, Fergus and two other Facebook researchers revealed their "generative image model" work on Friday with a paper published to research repository arXiv.org. This system uses not one but two neural networks, pitting the pair against each other. One network is built to recognize natural images, and the other does its best to fool the first.

Yann LeCun, who heads Facebook's 18-month-old AI lab, calls this adversarial training. "They play against each other," he says of the two networks. "One is trying to fool the other. And the other is trying to detect when it is being fooled." The result is a system that produces pretty realistic images.
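
Conceptually, that adversarial game boils down to a fairly small training loop. Here is a bare-bones sketch of the idea in Python/PyTorch - an illustration only, not the code from the Facebook paper, and the network sizes, learning rates and the random stand-in for 'real' data are all made up:

```python
import torch
from torch import nn, optim

# Generator: turns random noise into a fake "image" (here just a flat 784-value vector).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
# Discriminator: guesses whether an "image" is real (1) or generated (0).
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())

opt_G = optim.Adam(G.parameters(), lr=2e-4)
opt_D = optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(5000, 784)   # stand-in for a dataset of real photos

for step in range(1000):
    real = real_images[torch.randint(0, 5000, (32,))]
    fake = G(torch.randn(32, 64))

    # Train the detector: learn to call real images 1 and generated images 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the forger: try to make the detector call its output "real".
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

The forger never looks at the real photos directly; it improves only because the detector keeps getting better at catching it.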

And a little bit scary:

The neural networks don't always get it right. Without the proper parameters, networks include various kinds of "noise" along with the image. As most pictures of a dumbbell have an arm pumping iron, one network deduced that dumbbells must have arms, and that no dumbbell image is complete without a flesh-toned appendage.

[Image: the network's dumbbell pictures, each with a flesh-toned arm attached]


The mention of the word "Facebook" alone gives me the creeps. Any 'practical' application of such a system, should it ever come to fruition, will most surely be used for marketing.

Are you really surprised? The fact that it can be used for marketing probably has the most to do with why the tech has developed so rapidly in the past decade. It's all been about creating more efficient "salesmen." All the conveniences the rest of us have gotten from it are strictly secondary. It's no different from television. Television doesn't exist for the news or the entertainment. It exists for the sponsors.


And here comes the part where it will be proven that robots can even do art better than a human.

They already have the software they need to do that in the form of 3D rendering/modeling engines. Hook one of those up to one of these things and see what it spits out. :P


The rendering/modeling engine was still just a 'medium' though. That's like saying a paintbrush is better than a human. The modeling program is a tool.

What we have here is a paintbrush creating new works without the presence of a painter.


These pictures are amazing. However, the neural networks seem to have a limited number of recognizable patterns in their database. It seems that they have been fed pictures of dogs, pagodas, and roman-style buildings, so the same recognizable pareidolia patterns seem to crop up everywhere. It would be interesting to see what would happen if it was fed with a much larger database, like Google Images.

It reminds me of this article about an experimental AI system at Google that seems to obsess about cat pictures:

http://www.wired.com/2012/06/google-x-neural-network/


LOL, I was already thinking that if this computer were fed all of the internet, any inkblot you gave it would probably come out looking like cats and pornography.


Really awesome to see us finally getting around to developing neural computing and advancing "robotic vision".

Computers have needed this sort of capacity for a while. Numerical computation needs to be absurdly complicated in order to do what neural computation can do simply (and if humans are any indication, the opposite is also true).

We'll probably see a new generation of computers that combine logical and neural processing cores, dividing tasks between them for maximum efficiency.


They are starting to put FPGAs in Xeons now so that they can accelerate a wider array of algorithms to ease the CPU load in server farms. A couple of years ago I read about neural-net-on-a-chip devices being tested. When those are as well developed as FPGAs, I can imagine a neural net being shoehorned into the die as well to increase the number of problems the CPU can solve (things like improved random number generation).


I'd say it's too soon to jump to conclusions. Same with the notion of it being a step - maybe (at best) a baby step... the machine is still running off a set of given rules, not making them up on its own.

It's a bit more complicated than that. The point of ANNs is that it basically is the machine making the decisions. Well, you could always argue that all processors work according to a predefined set of rules, but it's the same with our brains ((bio-)physics).

ANNs work similarly. The only layer in which things are predefined is a very fundamental one. In our (biological) case it's biology and physics; in the case of artificial neural networks it's the basic neural model (how do signals travel? what do neurons do?) and the learning algorithm. Keep in mind that the development of ANNs started as an attempt at recreating biological neural networks.
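
To make that concrete, here is roughly what that predefined fundamental layer amounts to in the simplest possible case - a single artificial neuron with the classic perceptron learning rule (my own toy illustration, not code from any of the articles). The programmer writes down how the neuron computes and how it updates itself; the weights, which are the part that actually ends up doing the deciding, are learned from examples.

```python
import numpy as np

# Predefined: how a neuron turns inputs into an output.
def neuron(weights, bias, inputs):
    return 1 if np.dot(weights, inputs) + bias > 0 else 0

# Predefined: how it learns (the classic perceptron rule).
def train(samples, labels, epochs=20, lr=0.1):
    w, b = np.zeros(samples.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - neuron(w, b, x)
            w += lr * error * x    # the weights themselves are learned, not written
            b += lr * error
    return w, b

# Teach it the logical AND function purely by showing examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train(X, y)
print([neuron(w, b, x) for x in X])    # -> [0, 0, 0, 1]
```

Scale that idea up to many layers and millions of weights and you get the kind of networks the articles are about.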

You only show an artificial neural network what to do. You actually train it ("train" is the correct scientific terminology). The network has to do the "learning" part 100% on its own; that is the point of artificial neural networks. And once it's ready, it will do and decide everything on its own - there is virtually nothing the programmer has created (except for maybe setting one or two parameters, but that is not "creating"). That ANN may even react differently to exactly the same situation/input. Artificial neural networks sometimes (more or less usually) grow so large (depending on the model) and complex that it is practically impossible to comprehend everything - or anything - that is happening inside them. You could say they sometimes become their own territory.

As an example: I've trained a neural network to do basic digit recognition. I have absolutely no idea how to program digit recognition. Not a single little idea. All I did was create a model of an ANN, implement it, initialize it, and tell it what to learn (and, of course, show it examples and train it). At the end it could recognise digits, even ones written in a way it had never seen before. It wasn't me doing this. I still have no idea how to programmatically do digit recognition.
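
For anyone who wants to try that themselves, here is a minimal sketch of that kind of experiment using scikit-learn's small built-in digits dataset - not my original network, just an illustration of how little 'programming' of the actual recognition is involved (the hidden layer size and iteration count are arbitrary):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 images of handwritten digits, flattened to 64 input values each.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The "model": one hidden layer of 64 neurons. Nowhere do we describe what a
# "3" or a "7" looks like - we only show the network labelled examples.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)              # the training step

# Fraction of digits it has never seen before that it now classifies correctly.
print(net.score(X_test, y_test))
```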

So is it the computer making actual decisions? To some extent, definitely.

They are starting to put FPGAs in Xeons now so that they can accelerate a wider array of algorithms to ease the CPU load in server farms. A couple of years ago I read about neural-net-on-a-chip devices being tested. When those are as well developed as FPGAs, I can imagine a neural net being shoehorned into the die as well to increase the number of problems the CPU can solve (things like improved random number generation).

It's the SyNAPSE project from IBM you are thinking about. They've made huge progress since then; I recommend checking it out. Really interesting.

