
So Computers Can Hallucinate Now


Nuke


I really don't get why it is so hard for people to grasp that the future is here and now.

Programming a Neural Net (enough of that "ANN" politically incorrect nonsense) is like rearing a child: watching it take its first steps, getting it to learn how to walk, and soon you have it doing chores around the house like a little personal servant. You are dealing with a creature of limited intellect, but a creature nonetheless. You don't open it up, change a few variables, and hope that fixes the problem, because you don't even know what any of those variables are doing; it'd be akin to sticking a searing hot rod up your nose to cure insanity.

What I will say is that this is a publicity stunt.

Remember, Neural Networks are trained to do a single task, in this case to "complete the image" based on knowledge of prior images. A neural network that has been exposed only to buildings sees buildings when it looks at trees; a network exposed only to "You can be buff too if you buy my video of me lifting barbells" deduces that there must be an arm next to the barbell.
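
To make that concrete, here is a toy sketch of "complete the image from prior images" (plain C, and purely illustrative -- a nearest-neighbor lookup stands in for the trained network, and the patterns and sizes are made up): because the memory bank only holds building-like stripe patterns, whatever gets masked out is filled back in looking like a building.

```c
/* Toy sketch: "complete the image" by matching the visible pixels against
 * a bank of previously seen patches and copying the best match into the
 * hole. The bank holds only "building-like" stripe patterns, so every
 * completion comes out looking like a building, whatever was really there. */
#include <stdio.h>
#include <float.h>

#define N    8   /* patch width                  */
#define BANK 3   /* number of remembered patches */

int main(void) {
    /* memory of prior "images": only vertical-stripe (building-ish) patterns */
    float bank[BANK][N] = {
        {1, 0, 1, 0, 1, 0, 1, 0},
        {1, 1, 0, 0, 1, 1, 0, 0},
        {0, 1, 0, 1, 0, 1, 0, 1},
    };
    /* observed patch: right half is missing (masked with -1) */
    float obs[N] = {1, 0, 1, 0, -1, -1, -1, -1};

    int best = 0;
    float best_err = FLT_MAX;
    for (int b = 0; b < BANK; b++) {          /* compare only visible pixels */
        float err = 0;
        for (int i = 0; i < N; i++) {
            if (obs[i] >= 0) {
                float d = obs[i] - bank[b][i];
                err += d * d;
            }
        }
        if (err < best_err) { best_err = err; best = b; }
    }
    for (int i = 0; i < N; i++)               /* fill the hole from the best match */
        if (obs[i] < 0) obs[i] = bank[best][i];

    printf("completed patch: ");
    for (int i = 0; i < N; i++) printf("%.0f ", obs[i]);
    printf("\n");
    return 0;
}
```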

See, the REAL interest isn't the nonsensical "dream" trash, but that our mind is doing this ALL OF THE TIME! In other words, what was created was PERCEPTION. The arm is there, but it falls on the blind spot where the optic nerve leaves the eye, so we draw it back in. We've never seen a fish before, but we have seen these other things; perhaps it is like one of those.

The "dreams" are a side effect of perception, but perception is what is more interesting to think about.

*(If artificial is something that was created by humans, and two humans procreate, does that mean you're an artificial human?)

Edited by Fel

i think for all intents and purposes an ann is just a neural network implemented in software, as opposed to hardware (or biology). i think a consumer grade cpu is actually a huge bottleneck to implementing an ann; a supercomputer might be better at it, or even your video card. but what we really need is a completely different piece of hardware to take it to the next level (and that hardware is currently available and improves at the rate of moore's law, i think). i also have a feeling quantum computers would be really good at it, because quantum mechanics is chaos incarnate and chaos is good for a neural net.
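
to see why the hardware matters so much, here is what a single fully connected layer boils down to in plain c (toy sizes, made-up weights): rows of independent multiply-accumulates, which is exactly the kind of work a gpu or dedicated neural hardware chews through in parallel while a cpu grinds through it one element at a time.

```c
/* A single fully-connected layer: every output neuron is an independent
 * dot product plus a nonlinearity. Toy sizes and weights, purely for
 * illustration. Compile with -lm for tanhf. */
#include <stdio.h>
#include <math.h>

#define IN  4
#define OUT 3

int main(void) {
    float w[OUT][IN] = {               /* made-up weights */
        { 0.5f, -0.2f,  0.1f,  0.7f},
        {-0.3f,  0.8f,  0.4f, -0.1f},
        { 0.2f,  0.2f, -0.6f,  0.5f},
    };
    float bias[OUT] = {0.1f, -0.2f, 0.0f};
    float x[IN]     = {1.0f, 0.5f, -1.0f, 2.0f};

    for (int o = 0; o < OUT; o++) {    /* every output neuron...      */
        float acc = bias[o];
        for (int i = 0; i < IN; i++)   /* ...is a multiply-accumulate */
            acc += w[o][i] * x[i];
        printf("neuron %d -> %f\n", o, tanhf(acc));   /* nonlinearity */
    }
    return 0;
}
```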

the neuron equivalent circuit is something you can do in silicon as an analog device with mappable input/output pathways. you could throw in some a/d stuff for state load and save. this would probably be a number of distributed cells, each with a dac/adc, some memory and an analog mux/demux, so many samples can be taken in parallel, stored in the local ram, and read in/out one at a time on the digital side. all so you end up with less skew on the state data. it would allow you to back up a rough approximation of the neural net state and load it as needed.

that would make it possible to train the neural net at the lab and deploy it on a large number of mass produced devices at the factory. if you trained your robot butler to play ping pong but wanted to teach him something else without losing the ping pong skill, you could just save the state of the neural net, restore the default and train it a new skill (or load up pre-trained states, which could be sold as "software" for your robot; maybe "fuzzware" would be a more appropriate term). if the robot butler goes crazy and starts trying to kill you, you can return it to factory default and file the proper lawsuits. :D
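
the save/restore part is easy to picture even in pure software terms. a minimal sketch of the "fuzzware" workflow (the file name, layout and weight count here are all made up): dump the learned weights, wipe back to factory default, and reload the ping pong skill later.

```c
/* Minimal sketch of the "fuzzware" idea: dump the network's learned state
 * to a file, restore factory defaults, and reload the saved skill later.
 * File name and layout are made up for illustration. */
#include <stdio.h>
#include <string.h>

#define N_WEIGHTS 16

static int save_state(const char *path, const float *w, size_t n) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t wrote = fwrite(w, sizeof(float), n, f);
    fclose(f);
    return wrote == n ? 0 : -1;
}

static int load_state(const char *path, float *w, size_t n) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t got = fread(w, sizeof(float), n, f);
    fclose(f);
    return got == n ? 0 : -1;
}

int main(void) {
    float weights[N_WEIGHTS];
    for (int i = 0; i < N_WEIGHTS; i++)
        weights[i] = 0.01f * i;                        /* pretend "trained" state */

    save_state("ping_pong.skill", weights, N_WEIGHTS); /* back up the skill */
    memset(weights, 0, sizeof weights);                /* factory default   */
    /* ...train a different skill here...                                   */
    load_state("ping_pong.skill", weights, N_WEIGHTS); /* ping pong is back */

    printf("restored weight[5] = %f\n", weights[5]);
    return 0;
}
```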

Edited by Nuke

Quote:

Remember, Neural Networks are trained to do a single task, in this case to "complete the image" based on knowledge of prior images. A neural network that has been exposed only to buildings sees buildings when it looks at trees; a network exposed only to "You can be buff too if you buy my video of me lifting barbells" deduces that there must be an arm next to the barbell.

See, the REAL interest isn't the nonsensical "dream" trash, but that our mind is doing this ALL OF THE TIME! In other words, what was created was PERCEPTION. The arm is there, but it falls on the blind spot where the optic nerve leaves the eye, so we draw it back in. We've never seen a fish before, but we have seen these other things; perhaps it is like one of those.

What I would really love to see is reaching a point where they can teach it to navigate a 3D environment. Not like game-driven AIs do, but by actually looking at the screen, only having access to the visual data that a human gamer has. Even better if it can be a game where it's playing against humans.
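
Roughly, the setup would look like this skeleton (C, and every function in it is a hypothetical placeholder, not a real game API): the agent's only input is the frame buffer a human would see, and its only output is key presses. A real attempt would put a trained network inside choose_action() instead of the random policy.

```c
/* Skeleton of a "plays from pixels only" agent. grab_frame(), press_key()
 * and choose_action() are hypothetical stubs, not a real game API; the
 * random policy stands in for an untrained network. */
#include <stdint.h>
#include <stdlib.h>

#define W 320
#define H 240

typedef enum { FORWARD, BACK, LEFT, RIGHT } action_t;

/* placeholders for talking to the game */
static void grab_frame(uint8_t frame[H][W][3]) { (void)frame; }
static void press_key(action_t a)              { (void)a; }

/* stand-in for the neural net: maps raw pixels to an action */
static action_t choose_action(uint8_t frame[H][W][3]) {
    (void)frame;
    return (action_t)(rand() % 4);
}

int main(void) {
    static uint8_t frame[H][W][3];
    for (int step = 0; step < 1000; step++) {
        grab_frame(frame);               /* only input: what's on screen */
        press_key(choose_action(frame)); /* only output: the controls    */
    }
    return 0;
}
```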


Well, http://search.yahoo.com/search?p=FPGA+Neural+Network

I'd guess we use something similar, but software isn't strictly CPU-limited; it's also limited by high-level programming. It really is amazing when you look at the assembly and realize that all the "good" programming practices people are taught are actually slowing the software to a halt (we trust the compiler to undo the bad code we write so that it's easier for humans to read). Coding in low-level C or assembly can easily net some major speed increases (especially if you don't "recreate" C++) and allows optimizations like consistent use of SSE, which gives you data-parallel (SIMD) processing if done right.
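
Here's the kind of thing I mean, as a sketch (it assumes an x86 CPU with SSE; the data is toy data): the same neuron dot product written once as portable scalar C and once with SSE intrinsics that handle four floats per instruction.

```c
/* The same dot product two ways: portable scalar C, and SSE intrinsics
 * that process four floats per instruction. Assumes x86 with SSE. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

#define N 16             /* kept a multiple of 4 for the SSE path */

static float dot_scalar(const float *a, const float *b, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

static float dot_sse(const float *a, const float *b, int n) {
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4) {                  /* 4 lanes at a time */
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));
    }
    float lanes[4];
    _mm_storeu_ps(lanes, acc);                        /* horizontal sum */
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}

int main(void) {
    float a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = i * 0.5f; b[i] = 1.0f; }
    printf("scalar: %f  sse: %f\n", dot_scalar(a, b, N), dot_sse(a, b, N));
    return 0;
}
```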

In reality though, x86 and x86-64 are pretty horrible CPU designs, not really made for data processing or half of what we really want them to do. GPUs are a much better candidate, but yeah, FPGAs are awesome.

"Artificial Neural Network" is just political correctness, someone wanted to distinguish humans from things created by humans; except that's really just to say "Humans have souls and go to heaven, robots don't." That's all that was about.

- - - Updated - - -

Quote:

What I would really love to see is reaching a point where they can teach it to navigate a 3D environment. Not like game-driven AIs do, but by actually looking at the screen, only having access to the visual data that a human gamer has. Even better if it can be a game where it's playing against humans.

And that is why I thought it was much more interesting to think about perception. Sure, the Google car can run around and not crash... but if you try to confuse the AI by introducing something it has never seen before, its reaction is entirely unpredictable. With this, we're not only learning how to give a machine perception, but also what happens once it has it.

It pushes us closer to having a neural net that creates entirely new, yet valid, responses on its own without needing someone sitting there and correcting it.


fpga based implementations kind of suffer from a lack of routing resources. they can synthesize artificial neurons quite well, but there are not enough programmable 'wires' to connect them all together, and i don't think they are capable of on-the-fly rewiring. multiple fpgas with a high speed interconnect might do a better job. but you can get asic based neural net chips with 1024 neurons for like $80, so you are probably better off putting a bunch of those on a high speed interconnect. and that's nothing compared to the kinds of devices being tested in the labs of ibm and others. i can't wait till an on-die neural net is standard equipment in consumer cpus; what that would do for things like game ai would be rather impressive.

