andrew123

Ghost in the Shell - Jigabachi AI


So, how long until we have AIs as capable as those helos? :D

Those are some awesome counter-gyros.

I can already see the US analogue...

[image: 6gZDI47.jpg]

And there are the modern DIRCM countermeasure systems.

[image: 9rbiwVI.jpg]



Not sure I can deal with an autonomous machine making life-and-death decisions. There is more to war than just killing bad guys. One has to take into account nearby structures and innocents who may be harmed in a strike (collateral damage); I think an AI would be too harsh in that regard. Entire missions have been scrubbed because the human element said the killzone was no good.

I am afraid I am becoming a minority in my thinking; the military really, REALLY wants autonomous machines. I just don't think it's the right thing to do. But to answer your question... not too much longer. I'd say 10 years.

Not sure I can deal with an autonomous machine making life-and-death decisions. There is more to war than just killing bad guys. One has to take into account nearby structures and innocents who may be harmed in a strike (collateral damage); I think an AI would be too harsh in that regard. Entire missions have been scrubbed because the human element said the killzone was no good.

I am afraid I am becoming a minority in my thinking; the military really, REALLY wants autonomous machines. I just don't think it's the right thing to do. But to answer your question... not too much longer. I'd say 10 years.

Do you really think it makes much of a difference? Manned craft and UAVs remotely piloted by humans kill innocent people almost daily already. The fact that there is a huge amount of innocent collateral damage seems of little interest. It is easier to believe that that truck with a family of 10 consisted entirely of evil terrorists.

Please note that I do not want to make this a political debate - it's just the status quo.


Well, let's at least give it Asimov's laws. And hope that Skynet fails.


IBM came out with a 1-million-neuron, 256-million-synapse neuromorphic processor this year; that's about a frog's brain. Neuromorphics is probably the only way we will get AI that is compact and low-power enough to fit inside a robot. In 2011 they came out with a 256-neuron, 256^2-synapse chip, so in 3 years they increased capacity by 4,096 times. The human brain has ~80 billion neurons and on the order of 800 trillion synapses, so if neurons equal neurons and synapses equal synapses, and that rate of capacity growth holds, they should have human-brain equivalence in 2020-2025.

Of course, the human brain uses analog synapses and these are still digital, but it should not be hard to develop an analog synapse; the synapse itself is just a transistor. Neuromorphic chips already have a huge speed advantage over biological neural circuits: that 2011 prototype clocked in at 2 million firings per second per neuron, while biological neurons top out at roughly 1,000 per second. An artificial brain at those firing speeds would experience half an hour for every second a biological brain experiences. That is fast enough to see bullets and dodge them, assuming the robot body could move that fast.
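That extrapolation is easy to check with back-of-the-envelope arithmetic. This is a sketch using only the figures quoted in the post (taking "1 million" as 2^20 neurons, which makes the 4,096x jump exact), not any official IBM numbers:

```python
import math

# Neuron counts of the two IBM prototypes quoted in the post
neurons_2011 = 256
neurons_2014 = 2 ** 20            # ~1.05 million
human_neurons = 80e9              # ~80 billion

growth = neurons_2014 / neurons_2011        # 4096x over 3 years
annual = growth ** (1 / 3)                  # 16x per year
years_left = math.log(human_neurons / neurons_2014, annual)
print(2014 + years_left)                    # rough human-scale date

# Speed comparison: 2 million firings/s vs ~1000/s for biology
speedup = 2_000_000 / 1_000
print(speedup / 60)                         # subjective minutes per real second
```

At that rate the neuron count alone crosses 80 billion around 2018; closing the larger gap on the synapse side takes slightly longer, which is roughly consistent with the 2020-2025 window. The speed ratio works out to about 33 subjective minutes per real second, matching the "half an hour" figure.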

IBM came out with a 1-million-neuron, 256-million-synapse neuromorphic processor this year; that's about a frog's brain.

1 million neurons, and no glial cells, which are equally vital and have much more complex behaviour.


Isn't one of the problems that we are still not entirely sure of the different neuron types and what they actually do?

1 million neurons-and no glial cells, which are equally vital and have much more complex behaviour.

It is speculative whether glial cells provide any computational capabilities; they may function only as support for neurons. Artificial neurons do not need biological support cells: they do not need other cells to keep them alive, to assist in maintaining and establishing synaptic connections, or to provide paracrine feedback.

Isn't one of the problems that we are still not entirely sure of the different neuron types and what they actually do?

Well, sure, if the goal is to replicate a human brain. If the goal is simply artificial intelligence that solves problems and accomplishes complex commands, then no. In that case the problem is merely one of computational power, the kinds of computing needed, programming depth, etc. Following a biological morphology is merely to allow for significant energy savings and efficient specialized computation, not for accurate emulation of the human mind (although some are trying to do that). It's sort of like how both birds and airplanes fly, and the wings of airplanes were originally modeled after birds, but the airplane was not trying to mimic the bird, only trying to replicate the ability to fly.

Frankly, I don't think it is wise of us to try to make a machine that thinks too much like a human. Not only would such a machine be dangerous, it would also present moral problems if it feels and has free will. We want emotionless machines that obey commands and seek only to obey and serve, like the machines of today, only smarter.

It is speculative whether glial cells provide any computational capabilities; they may function only as support for neurons. Artificial neurons do not need biological support cells: they do not need other cells to keep them alive, to assist in maintaining and establishing synaptic connections, or to provide paracrine feedback.

Glial cells have been shown to be involved in information processing, although the mechanism is unclear. Most notably, it's been found that injecting mouse brains with human glial cells made the mice smarter.

Glial cells have been shown to be involved in information processing, although the mechanism is unclear. Most notably, it's been found that injecting mouse brains with human glial cells made the mice smarter.

That does not show much. Let's get down to brass tacks: what computation do you think the glial cells actually do, and why are you supposing it can't be replicated? If their activity is needed simply to modulate synapses and neural connection strength, that can be, and already is, done in neuromorphics. Digital computers layered above the neuromorphic hardware already provide the ability to observe and re-program neural connections on the fly in ways biological brains cannot.

I understand if you want to argue spiritualism: that there is no materialistic way we could replicate or surpass human thinking. But don't go about it by half-heartedly suggesting there is some mysterious biological component that multiplies the amount of computing power needed to achieve the same operation artificially. The argument is speculative, an appeal to the unknown, and finally, even if such a component is proven, it can then be replicated. You need to simply come out and say there is a supernatural component to human general intelligence which can never be materially replicated or surpassed.

That does not show much. Let's get down to brass tacks: what computation do you think the glial cells actually do, and why are you supposing it can't be replicated? If their activity is needed simply to modulate synapses and neural connection strength, that can be, and already is, done in neuromorphics. Digital computers layered above the neuromorphic hardware already provide the ability to observe and re-program neural connections on the fly in ways biological brains cannot.

Changes in the concentrations of various regulatory chemicals throughout the brain, including neurotransmitters, in response to chemical and electrical signals from the blood, other glial cells, and neurons. I'm not saying it means computer models won't work, just that current computer models are probably much too simplistic.

Changes in the concentrations of various regulatory chemicals throughout the brain, including neurotransmitters, in response to chemical and electrical signals from the blood, other glial cells, and neurons. I'm not saying it means computer models won't work, just that current computer models are probably much too simplistic.

Again, this is not about modeling the brain; we are just talking about generating "thinking". Neural networks as they are have done very impressive learning tasks: that 256-neuron prototype learned to recognize characters and play Pong from visual input. And yet more is to come. Cute now; killing people in 10-15 years, perhaps?
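For anyone curious what "learning a task" means at small scale, here is a hypothetical toy sketch, far simpler than IBM's chip and not its actual design: a single artificial neuron trained with the classic perceptron rule learns the OR function from examples rather than being programmed with it.

```python
# Toy single-neuron "perceptron" learning the OR function from examples.
# Minimal sketch of the principle (weighted connections tuned by feedback).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [-0.6, -0.4]   # deliberately bad starting weights
b = 0.2            # bias
lr = 0.1           # learning rate

for _ in range(50):                          # training epochs
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                   # 0 when already correct
        w[0] += lr * err * x1                # nudge weights toward the target
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1] -- the OR truth table, learned from data
```

Scale the same idea up to a million spiking neurons in silicon and you get character recognition and Pong; how much further it scales is the open question.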

Not sure I can deal with an autonomous machine making life-and-death decisions.

There was an interesting TED talk about that topic a few years ago:


Machines could frankly be made MORE moral than people. A robot, not fearing for its life (for it is not alive, nor capable of fear), could decide to kill purely on protocol, in accordance with UN conventions, if it is smart enough and programmed to follow such protocols consistently and with fidelity.

Imagine robot police officers, for example, as police conduct is an issue I know is close to many people's minds these days.

A warrant is issued and the robot police line up at a residence to make an arrest. They do not "SWAT up": they need no military equipment, not even guns. They announce their presence in accordance with the Constitution, without fail, then knock down the door. A dog attacks them; they do not shoot the dog as human police would. One robot simply reaches down, lets the dog bite its hand, then grabs the dog by the jaw, lifting it and holding it mid-air, and they enter the residence. They do not force people to the ground at gunpoint, throw stun grenades, or smash and burn. They do not demand compliance at the tip of a gun: they simply scan faces, recognizing in milliseconds who is and is not the suspect. Non-suspects are ignored, told to stand aside at most. They search the building and find the suspect armed. The suspect shoots them; they do not shoot back. They merely run up to the suspect and grab the gun out of the suspect's hand. If a robot goes down because of a well-placed shot, they don't care. They do not beat the suspect, nor kick, punch, strangle, taze, or mace. They simply grab the suspect, pick the suspect up, and carry the suspect, like an oversized baby, out of the building and to jail. The suspect can't complain of brutality, or racial or even sexual assault, by the robot police; the robot police are not capable of any of that. Everything was done in perfect accordance with the law and protocol, and everything was documented in video, audio, body position, tactile data, etc., such that the whole arrest could be recreated holographically in minute detail for a court of law or for the public record (COPS in 3D!).

Now if that became possible, why would we want humans in the loop?

Of course, perhaps machines will go the way of the Terminator instead. It all depends on how well we program them, how well they can be programmed, and the psychology that is possible for "sentient" machines. If machines can be made to compulsively follow commands and protocols regardless of how intelligent they are, then they can be made as perfect as I described above. If not, well, then they might be as "difficult" as Skynet. We will have to wait and see, but so far it has been the former; then again, so far machine intelligence has been at most at dog level.

Do you really think it makes much of a difference? Manned craft and UAVs remotely piloted by humans kill innocent people almost daily already. The fact that there is a huge amount of innocent collateral damage seems of little interest. It is easier to believe that that truck with a family of 10 consisted entirely of evil terrorists.

Please note that I do not want to make this a political debate - it's just the status quo.

True, but humans also have to answer for their mistakes. They follow a chain of command. And what if some bored kid decides to hack into one of those drones and unleash its entire armament on the kid down the street (think swatting, but deadlier)? The University of Texas has already shown how easy it is to take over a military-class drone with commonly bought hardware.

Edit: I dislike armed drones, period. Autonomous or manned by a virtual pilot.

I'm not satisfied until we have fully fledged replicants. :)


Only thing I have to "say" about this:

Maybe spoilers for some, I still recommend watching the movie.

Stephen Falken: The whole point was to find a way to practice nuclear war without destroying ourselves. To get the computers to learn from mistakes we couldn't afford to make. Except, I never could get Joshua to learn the most important lesson.

David Lightman: What's that?

Stephen Falken: Futility. That there's a time when you should just give up.

Jennifer: What kind of a lesson is that?

Stephen Falken: Did you ever play tic-tac-toe?

Jennifer: Yeah, of course.

Stephen Falken: But you don't anymore.

Jennifer: No.

Stephen Falken: Why?

Jennifer: Because it's a boring game. It's always a tie.

Stephen Falken: Exactly. There's no way to win. The game itself is pointless! But back at the war room, they believe you can win a nuclear war. That there can be "acceptable losses."

David Lightman: [typing] What is the primary goal?

Joshua: You should know, Professor. You programmed me.

David Lightman: Oh, come on.

David Lightman: [typing] What is the primary goal?

Joshua: To win the game.

Stephen Falken: General, you are listening to a machine! Do the world a favor and don't act like one.

Only thing I have to "say" about this:

Maybe spoilers for some, I still recommend watching the movie.

Elon Musk certainly mistrusts AIs.

I'm more neutral, because the benefits of the singularity seem limited only by the imagination of our minds.

