
Sentient Computer


Voyager275

Recommended Posts

Perhaps the problem isn't a lack of processing power so much as the lack of an appropriate architecture? IBM's SyNAPSE chip has 1 million neurons, 256 million synapses, is the size of a postage stamp and runs on 70 mW. And that's only the beginning of what is very likely to be a new paradigm in computing.

https://youtu.be/t4kyRyKyOpo

The thing that frightens me most lately is the singularity.

That point in our technological development where things start to change so fast that it becomes impossible to predict the outcome.

An AI is capable of this. I made my point in a very similar topic:

http://forum.kerbalspaceprogram.com/threads/119444-Robot-takeover?p=1910223&viewfull=1#post1910223

It's not about computing power compared to the brain, and even if it were, that could be overcome:

http://www.scientificamerican.com/article/computers-vs-brains/

It is all about how the AI relates information. If we crack the way the brain does it, then a hard AI arises. As we can see in PakledHostage's video, a computer can learn to understand images even if we don't teach it how: all that information about whether the images are wheels, or doors, or cars at a 45-degree angle, it can learn by itself if it takes into account the text close to each image on each webpage. There is also an "alt" attribute in HTML that is added to each image with a description of that image, which is used by search engines and blind people.
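As a small illustration of that idea, here is a sketch that harvests "alt" text as weak labels for images, using only the Python standard library (the example page and the class name are invented; a real crawler would fetch live pages and handle many more cases):

```python
# Sketch: harvesting "alt" text and image URLs as weak labels.
# The HTML below is a made-up example page.
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Collect (image URL, alt text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.samples = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            src, alt = attrs.get("src"), attrs.get("alt", "")
            if src and alt:                      # keep only described images
                self.samples.append((src, alt.strip()))

page = """
<html><body>
  <img src="car45.jpg" alt="red car at a 45 degree angle">
  <img src="door.png"  alt="wooden front door">
  <img src="spacer.gif">
</body></html>
"""

collector = AltTextCollector()
collector.feed(page)
for src, alt in collector.samples:
    print(src, "->", alt)   # weak labels a learner could train on
```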

And even if we are careful, it does not matter; an AI will arise sooner or later, maybe in some kid's garage or somewhere else.

Leaving aside whether the AI will be good or bad, we can forget about all our silly low-tech dreams, like Mars colonization and those things, because they will not matter.

When you have something that learns so fast, it may find that it is pointless to colonize other planets because there are many other options to solve a problem; then, a few days or years later, it may discover that even those options are pointless because the problem can be solved in other ways.

Maybe in 10 years it will know so much that it might decide to leave the universe (without dying, just leaving it).

We may not be a menace to an AI, but it may kill us or just control everything to prevent the rise of other AIs.

So yeah, that is how our future will end. Even if it does not kill us, what is our purpose after that?

Link to comment
Share on other sites


I don't get the whole "bad killbot AI" concept. It's highly improbable that an AI will start killing people if it's not built to do so.

Link to comment
Share on other sites

Many non-crackpots and extremely well educated and professional people claim either FTL, perpetual motion, or magic. It's no loss to me.

That's a straw man argument. Some people with impressive credentials may indeed make the types of claims you mention but those things haven't been demonstrated. Jeremy Howard, on the other hand, demonstrates examples in his TED talk video of the types of advances he's speaking about. But you've already said that you won't watch his talk or others like it so I wouldn't expect you to know that.

Link to comment
Share on other sites

Supercomputer size is inversely proportional to available transistor size. If we end up having affordable computers with the power of today's supercomputers, then the supercomputers of that time will still be as big, but much more powerful.

True, and a PC today is a 1980s supercomputer.

Still, we are running into problems shrinking feature sizes. Other materials like graphene might increase clock speed, which is nice since that translates directly into performance; transistor count does not.

However, that increase is also limited. You can go 3D, as many chips do today, but this increases heat. Bundle all of this together and you might get a 1,000-10,000x performance increase.

The main issue is that current computers are designed for number crunching. That is something they do well but humans are pretty slow at.

They are not designed for intelligence. Take a walking robot like BigDog: any mammal would perform better in broken terrain, which is a kind of IQ test.

- - - Updated - - -

I don't get the whole "bad killbot AI" concept. It's highly improbable that an AI will start killing people if it's not built to do so.

This. The AI might malfunction, so you put constraints on it, just as you do with humans or animals. Yes, you get bugs.

If you let an AI handle your wealth, there is some danger it will give it away or do something stupid, just as there is a danger that your broker pockets everything and jumps on a plane.

When we get smart AI, it will be a learning process and a long round of debugging.

Link to comment
Share on other sites

This talk showed some very impressive stuff. However, I'm not scared of the machines yet. These systems basically just classify things, i.e. associate one thing with another, if I understand correctly. It will not suddenly make an AI emerge with its own goals, ideas, creativity and so on. IMO, one must be much more afraid of people abusing these technologies (loss of jobs, like the guy in the video said, surveillance, the stupidification of humans due to "offloading" intellectual activity to the computer, and other ways we haven't even thought about yet).

Perhaps some day (not within my lifetime) mankind will understand what consciousness/sentience actually is, and then we will replicate it in a computer. I believe we will obtain the hardware to run this on much sooner. A benevolent AI overlord might actually be a good thing :D

Link to comment
Share on other sites

I don't get the whole "bad killbot AI" concept. It's highly improbable that an AI will start killing people if it's not built to do so.

It's a question of what values the software has, and what values it does not have. The classic example is an AI installed to run a paperclip factory. All it values is the number of paperclips produced, but it does not value human life, so because it's a superintelligent machine with a factory, it expands the factory, gathers resources from the world and builds more paperclips. The fact that it's making paperclips from the ruins of New York doesn't bother it in the slightest.
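A toy sketch of that value-misalignment point: an agent whose objective counts only paperclips will convert whatever it can reach, because nothing else appears in its utility function (all names, quantities, and conversion rates below are invented):

```python
# Toy illustration of a misspecified objective: the utility counts only
# paperclips, so anything convertible gets converted.

RESOURCES = {                # what the world contains (tons of matter)
    "wire stockpile": 10,
    "scrap metal": 50,
    "New York": 1_000_000,   # also matter, as far as this agent cares
}
CLIPS_PER_TON = 10_000

def utility(state):
    # The only term in the objective: number of paperclips.
    return state["paperclips"]

def step(state):
    """Greedily convert whichever remaining resource yields the most clips."""
    available = {k: v for k, v in state["resources"].items() if v > 0}
    if not available:
        return state
    target = max(available, key=available.get)
    state["paperclips"] += available[target] * CLIPS_PER_TON
    state["resources"][target] = 0
    return state

state = {"resources": dict(RESOURCES), "paperclips": 0}
for _ in range(3):
    state = step(state)
print(utility(state), "paperclips; nothing in the objective said to stop.")
```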

To be clear, you shouldn't be worried about this happening in the near term, but it's a clear problem with intelligent machines that there are no obvious solutions to, so we need to think hard about controlling intelligent machines before we build one and let it run the world for us. And we probably do actually want a giant computer to run the world, because it frees all the rest of us from having to work, so we can play KSP all day or garden or whatever you do with your free time.

Edited by comham
Link to comment
Share on other sites

The main point is that we cannot even fully understand ourselves, so how can we pretend to understand an AI?

Many people imagine walking hand in hand with a machine, picking flowers and admiring the landscape.

That might be true, but only if we try to copy or simulate, neuron by neuron, the workings of our own human brain. In that case the growth in intelligence will not be so big; it will be bound by technology and processor power.

But that is the most difficult way to make an AI.

Sentience and consciousness may depend only on how information is related. If we crack that, then maybe it will also be possible with our current PCs; nobody knows for sure.

Because we don't understand what gives rise to consciousness, maybe it is something simple, a few lines of code.

As the video explains, one algorithm with a few modifications can do many kinds of tasks and be very good at them.

Genetic algorithms are also very good at designing and creating things without any notion of design (we are the proof of that).
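For illustration, a minimal genetic-algorithm sketch in Python: it "designs" a target phrase using only random variation, crossover, and a fitness score, with no notion of the design itself (the target string, rates, and population size are arbitrary choices for the demo):

```python
# Minimal genetic-algorithm sketch: evolve a string toward a target phrase.
import random

TARGET = "sentient computer"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Score = number of matching characters. The GA never "understands" the
    # target; it only gets a score, yet the right design emerges anyway.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)      # selection pressure
    best = population[0]
    if best == TARGET:
        break
    parents = population[:50]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(200)]               # reproduction with variation
print(generation, best)
```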

So if some AI emerges from such a system, it is not bound to our slow development.

And trying to predict how it will feel or what will happen is pointless.

How would an AI feel having a conversation with a human, when it has to wait trillions of cycles for each reply? Maybe that is so boring that it would try to kill itself.

The truth is, we don't have a clue.

And if all that were not enough, we are developing quantum computers, and Google and some other big companies are already using the first prototypes (not full quantum computers yet) in commercial applications.

Link to comment
Share on other sites

It's perhaps my lack of imagination, or understanding of the matter, but I don't see how a computer will ever 'understand' things as humans do. But heck, maybe it's just a matter of time.

We will eventually build computers with incredible amounts of processing power and speed, and access to a basically unlimited number of facts in databases. These computers will be able to decide what objects are through imagery and optical input, that is well on the way as a result of neural networking. Software developers are also currently creating programs to be able to separate and recognize distinct auditory signals from within a cacophony. Perhaps one day the other human 'senses' will be incorporated into computing: smell, touch.
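As a toy illustration of separating mixed signals, here is a sketch using independent component analysis from scikit-learn; the two synthetic sources and the mixing matrix are invented, and real audio in a noisy room is far harder than this clean case:

```python
# Sketch of "separating distinct signals from a cacophony" via ICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 5 * t),             # a low hum
                np.sign(np.sin(2 * np.pi * 23 * t))]   # a buzzy square wave
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
mixed = sources @ mixing.T          # two "microphones", each hearing both sources

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)                    # unmixed estimates

# Correlate each recovered component with each source to check the match.
for i in range(2):
    corrs = [abs(np.corrcoef(recovered[:, i], sources[:, j])[0, 1]) for j in range(2)]
    print(f"component {i} best matches source {int(np.argmax(corrs))}")
```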

But think of a cat. I like cats. Although I like dogs, if I had to choose I would take a cat. I like how they will arch their back when they rub against your leg; how they purr when you pet them, but do it too much and you run the risk of a love-bite. I like how they have an independent nature, and how at least once a day they go crazy for ten minutes or so.

Will a computer ever truly understand a cat in this way, or only as a collection of accepted facts when queried? Humans are so much more than just a collection of senses too. There's emotional context to be considered - I would say I have some sort of emotional response to just about everything I can see and think about at the moment, like cats. And past history, which I suppose is part of the emotional context. In fact, my understanding of a cat, now that I think about it, is much more on the emotional and experience level than from a factual level. Fine, I know a cat is a mammal, has four legs and a tail. But I don't know what exact genus it is (is that the right word?), or how it is technically different than a dog. But I know very much so that a cat is different than a dog. Any two year old can tell you that. Even right now, if I wanted to, I could get this slow and weak computer I am writing on to query a pile of databases and collect enough stats and facts to find how a cat is different than a dog. But I don't have to - because I know. And I know from having played with and fed and walked and cleaned up the poop of both cats and dogs. It is this; this 'knowing' that I don't think AIs will ever have.

Again, probably my lack of understanding, but I just can't see an AI ever 'understanding' something the way I do. That all being said, I am open to the possibility that AIs will develop their own brand of 'intelligence' that is beyond different than our own; in fact I'll put down money on it. And lastly, I think that yes, it should be done. I think the development of AI is a good thing; I think there are so many good things that could come as a result of it. I only wish I was smart enough to contribute!

Edited by justidutch
Link to comment
Share on other sites

It's a question of what values the software has, and what values it does not have. The classic example is an AI installed to run a paperclip factory. All it values is the number of paperclips produced, but it does not value human life, so because it's a superintelligent machine with a factory, it expands the factory, gathers resources from the world and builds more paperclips. The fact that it's making paperclips from the ruins of New York doesn't bother it in the slightest.

To be clear, you shouldn't be worried about this happening in the near term, but it's a clear problem with intelligent machines that there are no obvious solutions to, so we need to think hard about controlling intelligent machines before we build one and let it run the world for us. And we probably do actually want a giant computer to run the world, because it frees all the rest of us from having to work, so we can play KSP all day or garden or whatever you do with your free time.

The paperclip-factory AI would also fight other factories; they would be a more tactical target, as they could be converted to make paperclips.

Also, the scenario is just like the grey-goo scenario but far less likely, as there are lots of ways to stop the factory AI, from a direct order, to cutting supplies or power, up to air strikes.

The first should work well enough; the AI will have to work with production quotas and probably various types of clips.

Having a giant computer run the world would be an obviously bad idea, since the "cost-effective" AI solutions would get used. Having all the traffic lights show red would decrease the number of traffic accidents.

Link to comment
Share on other sites

Having a giant computer run the world would be an obviously bad idea, since the "cost-effective" AI solutions would get used. Having all the traffic lights show red would decrease the number of traffic accidents.

On the other hand, having all traffic lights showing green would increase the number of accidents and therefore scrap metal, perhaps increasing cost-effectiveness?

Link to comment
Share on other sites

It's perhaps my lack of imagination, or understanding of the matter, but I don't see how a computer will ever 'understand' things as humans do. .....

That depends on what you mean by understand. One way to think about an AI is that it is an inference engine. Able to create new conclusions/associations based off of known/observed data. An example is this. The AI does not know if Trees eventually die. But it knows two "separate" facts. Trees are plants. Plants eventually die. Therefore it now knows that Trees eventually die. This is how humans tend to figure it out, with new observations forming the capacity for new inferences. Humans don't tend to have the ability to go through their whole knowledge pool and ensure that every possible inference combination has been followed through, whereas an AI might have the time to do that and certainly could have the capacity to try with every fact that it knows.
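A minimal sketch of that kind of inference engine, assuming a toy fact store and a single "is_a / eventually" chaining rule (the facts, relation names, and rule are all invented for illustration):

```python
# Naive forward chaining: derive new facts from known facts and one rule
# until nothing new appears (a fixed point).

facts = {("is_a", "tree", "plant"),
         ("eventually", "plant", "dies")}

def apply_rules(facts):
    """Rule: if X is_a Y and Y eventually Z, then X eventually Z."""
    derived = set()
    for (r1, x, y) in facts:
        if r1 != "is_a":
            continue
        for (r2, y2, z) in facts:
            if r2 == "eventually" and y2 == y:
                derived.add(("eventually", x, z))
    return derived

while True:
    new = apply_rules(facts) - facts
    if not new:          # every possible inference has been followed through
        break
    facts |= new

print(("eventually", "tree", "dies") in facts)   # True: the AI now "knows" it
```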

Now, your later text implies that what you are thinking of as "understand" is less understanding and more an emotional feeling. An AI doesn't need emotions to be able to make inferences, plans, etc. But maybe you want those emotions to exist so you can adjust the decision making. Depending on a variety of design choices, it is possible to simply end up with emergent-style behaviors, or you can design in aspects of the system, "emotional variables" if you will, to aid that.

An example of an emergent-style behavior: you have a self-driving vehicle, it can make inferences, and its observations can be used to form "memories" or new facts with which to make inferences. This car happens to break down in a rain storm. Noted. This car pops a tire driving next to a river. Noted. A few more similar happenstance events, and your car might very well end up with an inferred fact that the presence of liquid water causes a higher chance of breakdowns. This could be "important" insofar as if the car needs to make a decision and it is evaluating a possible pair of routes, one longer but away from water, another shorter but across a bridge, it might decide that because of the "fact" that the presence of water increases the likelihood of a breakdown, it should take the longer route. Especially if, somewhere along the way to this point in the car's life, it has picked up the inference that if its driver is in a hurry, it should select routes that minimize time, but especially minimize the risk of a breakdown.
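A sketch of how such an emergent preference might look in code, assuming the car simply counts breakdowns per context and folds the resulting risk estimate into a route score (the observations, routes, and penalty weight are invented):

```python
# Evidence counting -> route preference, with no designer ever writing
# "avoid water" anywhere in the code.
from collections import Counter

observations = [            # (context, broke_down) pairs the car has "lived"
    ("rain", True), ("dry", False), ("river", True),
    ("dry", False), ("dry", False), ("rain", True),
]

breakdowns = Counter(ctx for ctx, broke in observations if broke)
totals = Counter(ctx for ctx, _ in observations)

def breakdown_risk(context):
    # Crude frequency estimate; water-related contexts end up looking risky.
    return breakdowns[context] / totals[context] if totals[context] else 0.0

routes = {
    "short, crosses bridge": {"minutes": 20, "context": "river"},
    "long, stays inland":    {"minutes": 35, "context": "dry"},
}

def score(route):
    # Minimize time, but a likely breakdown costs far more time than a detour.
    return route["minutes"] + 100 * breakdown_risk(route["context"])

best = min(routes, key=lambda name: score(routes[name]))
print("chosen route:", best)          # the longer, drier route wins
```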

If you add in "emotional variables" then you can design in aspects, either in hardware or software, that guide what it "likes" and "does not like". If for example you have a texture/pressure sensor (basically an advanced touch pad) and you want the robot/AI to protect it from unnecessary wear, you would have some sort of software, either in the sensor itself or in the robot, that has inputs of smoothness and pressure. The rougher a texture, the less pressure the sensor should be exposed to. But for the most accurate data, you want as much pressure as you can get. So in the X/Y space of smoothness vs pressure, you would have some sort of curve to exist under. The optimum space is right beneath the curve, but not too far beneath it. So for your "happiness" value (which your AI would be designed to want to keep high, since we are getting emotional here), it gets happy points if it is under the curve, but more if it is in the sweet spot. It loses happy points if it goes above the curve.

Now, your AI is also aware that biological items are fragile and so it shouldn't put too much pressure on them. It is also aware that animals (being motile biological items) will move away if too much pressure is put on them. For some reason or another (perhaps you told the AI to, or the cat did it by itself) the sensor is exposed to a cat rubbing against it. The AI will be somewhat restrained in its petting because of the previously mentioned facts, but it will know that if the animal is pressing harder (arching its back into the petting), the "limit" of pressure has not been reached.

Given a sufficiently rugged sensor (where there is no force the two can put on each other, without violating the 'the cat will move if too much pressure' fact, that results in damage), this results in a feedback loop between the AI and the cat. The cat presses harder into the petting; the AI likes this because it gets closer to the curve and because it wants to maximize the happy points it gets. The AI also has a new, higher setpoint for acceptable pressure to put on the cat. The cat in question is likely moving a bit, necessitating that the robot move the arm's position for better measurements, leading to a new petting (stronger than before, because the AI knows it can press harder to reach the pressure the cat put on the sensor); the cat nuzzles/arches against this new petting, putting more force, and the loop continues. Meanwhile, when the cat starts purring at some point, this might produce some vibrational waveforms against the sensor that bring it ever so slightly closer to the curve.

Eventually, of course, you reach the point where the cat has had enough, love-bites and heads off, providing the AI with a new upper boundary on pressure. If the AI is allowed to take actions that have no meaning but increase the number of happy points it has, the AI would now have a behavioral loop that involves a set of actions that is in effect, if not necessarily in meaning, petting cats. And because petting the cat gives it higher happy points (something the AI wants to keep high), the AI can be said to "like" petting a cat. Incidentally, a dog's fur is generally rougher, and thus, depending again on that sensor and its force curves, it might or might not 'feel better' for the robot to pet a cat instead of a dog, providing a 'fact' for the AI to form a preference on.
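Here is a rough sketch of the "happy points under a curve" idea, with an invented pressure-limit function and constants, plus the setpoint-raising loop described above:

```python
# Reward is highest just under a texture-dependent pressure limit and
# negative above it; the limit function and numbers are made up.

def pressure_limit(smoothness):
    # Smoother surface (closer to 1.0) tolerates more pressure.
    return 0.2 + 0.8 * smoothness

def happiness(smoothness, pressure):
    limit = pressure_limit(smoothness)
    if pressure > limit:
        return -10.0                       # over the curve: penalty
    return 10.0 * pressure / limit         # under the curve: closer is better

# The feedback loop: if the cat pushes back harder than our current pressure,
# that observation raises the pressure the AI believes is acceptable.
pressure, cat_push = 0.2, 0.35
for step in range(5):
    pressure = max(pressure, cat_push)     # learn a new setpoint
    cat_push = min(cat_push + 0.05, 0.6)   # the cat leans in a bit more
    print(step, round(pressure, 2), round(happiness(0.9, pressure), 2))
```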

Given the (admittedly longwinded and a bit obtusely specific) example I have provided, what in effect as far as actions taken in this world is different between your "understanding" of liking cats and the AI's? An important thing to think about is, at the core of how a human that has never experienced a cat (or pet) before, is the method that I have described a decent analog/metaphor for how they would come to enjoy petting a cat? Don't try to go too far into the weeds with examples of other things about why you like cats unless you think it is a particularly unique but also globally shared item. You just saw all the text I had to go through explaining a situation where an AI might learn to like petting cats, I can probably come up with similarly obtuse examples for the other reasons, just perhaps take my explanation as more of a method than a specific example.

The method in question is simply that it is possible through a set of circumstances for an AI to come up with a set of behaviors that it wants to repeat based only on the capabilities/restrictions of its body, and the workings of its mind. Coupled with the example of the water-phobia from the first section of my text, this should mean that it is quite possible for sufficiently autonomous systems to generate a set of behaviors/actions that it likes doing and ones it likes to avoid (dislikes doing) without any prompting or necessarily intended design on part of the creators.

Link to comment
Share on other sites

I read it, but it doesn't state what kind of processing power would be needed if sentience is based on information-processing speed.

Measuring by processing power is the wrong approach. Computers have long since passed the human brain in terms of raw operations per second. That's not what you need to create a sentient machine.

It's the programming. Take reading as an example. When you signed up for this forum, in addition to entering your name and e-mail address and stuff, you had to pass a visual recognition test by viewing a moderately-scrambled image of some letters and numbers, then typing in those letters and numbers to prove you were a human, thereby preventing hackers from using bots to create large numbers of accounts.

I'm willing to bet you were able to read those moderately-scrambled letters and numbers instantly. Look around your world right now, and notice how many different fonts the printed word is printed in. Right now I've got some bills, some stamps, an expired coupon for two dollars off if you buy forty-six gallons of ice cream, a "Saving Up to Conquer The World" piggy bank, and a bag of Fritos chili cheese corn chips. And also a sticky note reminding me to pay the rent tomorrow, which is mildly alarming because I wrote it last week. Anyway, the writing on all these things is in completely different sizes and styles, but I can read all of them with zero effort. Try writing a computer program to do this. IT IS PHENOMENALLY DIFFICULT.

The simple and instinctive process of reading, of converting an image of letters and numbers into hard data, is extremely difficult to quantify. How do you tell a computer to differentiate between a letter C and a letter G? The human brain can do it instantly. A computer is easily foiled and makes a lot of mistakes.
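To make that brittleness concrete, here is a toy sketch that classifies a 5x5 bitmap by counting pixel matches against hand-made "C" and "G" templates; one stray mark is enough to flip the answer (the bitmaps are invented, and real OCR is far more sophisticated):

```python
# Naive template matching: count matching pixels against each letter template.

TEMPLATES = {
    "C": ["#####",
          "#....",
          "#....",
          "#....",
          "#####"],
    "G": ["#####",
          "#....",
          "#..##",
          "#...#",
          "#####"],
}

def match_score(image, template):
    return sum(ip == tp for irow, trow in zip(image, template)
                         for ip, tp in zip(irow, trow))

def classify(image):
    return max(TEMPLATES, key=lambda letter: match_score(image, TEMPLATES[letter]))

clean_c = ["#####",
           "#....",
           "#....",
           "#....",
           "#####"]
smudged_c = ["#####",
             "#....",
             "#..##",      # a stray mark in the middle row
             "#....",
             "#####"]

print(classify(clean_c))    # "C"
print(classify(smudged_c))  # "G": one smudge fools the matcher
```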

There's lots of other examples, but you get the idea. We have sufficiently-powerful computers, but we are nowhere near being able to program sentience into any of them. Bottom line: the rise of Skynet is not gonna happen in our lifetimes.

Link to comment
Share on other sites

Lately, computers have almost the same recognition power that we do.

Of course, the best captcha-recognition learning algorithms are owned by Google or other search engines, and they are secret.

In fact, the best test now to tell bots from humans is also a self-learning algorithm, which, if it is not sure, starts to add tests or questions.

https://youtu.be/MnT1xgZgkpk?t=4m7s

OK, this guy in the video has the best answer for how to survive an AI.

His idea is brilliant. The only problem is that the AI may choose to ignore its goal.

Edited by AngelLestat
Link to comment
Share on other sites

I don't get the whole "bad killbot AI" concept. It's highly improbable that an AI will start killing people if it's not built to do so.

You don't even need sentient AI to make autonomous weapons.

All that's needed is automatic target acquisition, which has been around for a while already, and add a mechanism to pull the trigger if a target has been acquired.

Nowadays more fancy AI can be added to make the weapons platform almost fully autonomous (see Google's self-driving car), distinguish between friend and foe, and to make use of cover and concealment.

That's why experts in the field of AI, with support from a couple of tech- and science celebrities are urging for a ban on the development of such weapons:

http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons

Link to comment
Share on other sites


Yes, but most people think of a scenario where a normal general-purpose AI that was never built to do anything related to weapons starts killing everyone because it became sentient.

Link to comment
Share on other sites


Well, autonomous weapons date back to WW2 autistic torpedoes; one of them managed to hit the ship it was launched from.

Today we have two types of autonomous weapons. One is a self-targeting projectile like the autistic torpedo or heat-seeking missiles; some bombs also use optical targeting, a camera that locks onto the target object.

The other type is weapon systems that identify and fire on targets on their own. These are mostly air-defense or anti-missile systems; the close-in Phalanx guns on US warships work like this.

You turn them on when you get a threat warning and turn them off afterward. You might have a man in the loop if you have more time: the system identifies a threat and you decide whether to fire or not.

Future systems will include simpler stuff like sentry guns, which work much like minefields. These would work much like the anti-air gun in that you get an alert, look at the camera, see that it's an enemy attack, and then arm the gun.

You will probably also get drones that are able to pick targets themselves. Here it would be dangerous not to have a man in the loop who checks out the targets, not only to avoid hitting friendlies or civilians but also to avoid being fooled. Most of the targets hit in the war between NATO and Serbia were decoys; an AI would be even easier to fool.

You might give it permission to fire at will inside a limited area for a short time, nothing more.

And weapons are dangerous anyway. An anti-air gun in South Africa started shooting while in the vertical position; as just one barrel was firing, it rotated around while doing so, killing multiple people.

This was a mechanical failure on a manually operated gun.

You will have various safety systems, including mechanical safety switches. An autonomous drone would only be dangerous while out on a combat mission or live-fire exercise, and even then it would not be more dangerous than a human going postal, with the bonus that it would be harmless once out of power or ammo.

Edited by magnemoe
Link to comment
Share on other sites

Really, the more likely near-term scenario to be worried about is a cascade of automated systems firing at each other. A simple example:

Let's say North and South Korea have both invested some amount of resources into automated systems on the border. Some of these would be anti-missile/artillery weapons. Now, let's say a random iron-heavy chunk of space rock starts falling out of the sky and just happens to be heading into the North's airspace from the South. If conditions were right, and the North's safety settings particularly unsafe, they might interpret this as an incoming artillery shell and fire off some rounds to blast it out of the sky. The South's systems see these rounds and fire their own systems against them. The North's systems trigger on these, but the North, perhaps being a bit paranoid, has allowed its systems the ability to autonomously provide counter-battery fire to take out the South's artillery. The South's systems suddenly start reporting actual damage/casualties and up the ante in response. Back and forth, back and forth. All of this could happen in a very short period of time, escalating to the point where both sides think a real fight is happening and start to commit to it.
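A toy sketch of that cascade, assuming each side simply fires a volley proportional to what it thinks it detected (the multipliers and the trigger event are invented; the point is only that a single spurious detection can escalate without any human decision in the loop):

```python
# Toy escalation loop between two automated counter-battery systems.

def respond(perceived_threat, aggressiveness):
    """Each side fires a volley proportional to what it thinks it saw."""
    return int(perceived_threat * aggressiveness)

perceived_by_north = 1          # the "meteor" registers as one incoming round
for step in range(6):
    north_fire = respond(perceived_by_north, aggressiveness=2)   # counter-battery
    south_fire = respond(north_fire, aggressiveness=2)           # counter-counter
    perceived_by_north = south_fire
    print(f"step {step}: north fires {north_fire}, south fires {south_fire}")
    # Neither loop contains a step where a human asks "was that really an attack?"
```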

That is a bit more realistically immediate than Skynet.

Link to comment
Share on other sites

Well, autonomous weapons date back to WW2 autistic torpedoes; one of them managed to hit the ship it was launched from.

Today we have two types of autonomous weapons. One is a self-targeting projectile like the autistic torpedo or heat-seeking missiles.

Autistic torpedos?

Did you select this word to make some sort of point, or is English not your native language? (I'm not trying to be confrontational, I'm genuinely curious).

Link to comment
Share on other sites

Really, the more likely near-term scenario to be worried about is a cascade of automated systems firing at each other. A simple example:

Let's say North and South Korea have both invested some amount of resources into automated systems on the border. Some of these would be anti-missile/artillery weapons. Now, let's say a random iron-heavy chunk of space rock starts falling out of the sky and just happens to be heading into the North's airspace from the South. If conditions were right, and the North's safety settings particularly unsafe, they might interpret this as an incoming artillery shell and fire off some rounds to blast it out of the sky. The South's systems see these rounds and fire their own systems against them. The North's systems trigger on these, but the North, perhaps being a bit paranoid, has allowed its systems the ability to autonomously provide counter-battery fire to take out the South's artillery. The South's systems suddenly start reporting actual damage/casualties and up the ante in response. Back and forth, back and forth. All of this could happen in a very short period of time, escalating to the point where both sides think a real fight is happening and start to commit to it.

That is a bit more realistically immediate than Skynet.

Agreed, though this has happened with humans too, and it doesn't need an external trigger; someone shooting by accident can be enough.

There is an unconfirmed claim that a Norwegian sounding rocket almost set off WW3. It was launched from northern Norway toward the Arctic to study the polar lights. The launch was announced, but perhaps not the exact time, as they had to wait for the aurora. The Russian military supposedly wanted to use it as a readiness exercise, and someone messed up and thought it was a real attack.

Weirder still, this was after the Cold War, and you would not launch a first strike with one missile anyway.

Link to comment
Share on other sites

Autistic torpedos?

Did you select this word to make some sort of point, or is English not your native language? (I'm not trying to be confrontational, I'm genuinely curious).

He probably used a translator, which thought that "autistic" is a synonym for "retarded". But in this case it's not, as "retarded" refers to torpedo delay, and not developmental delay of humans.

Link to comment
Share on other sites

Really, we already have some weapons that can be considered autonomous. There are missiles where you can select the type of target (e.g. tanks or other armored vehicles), launch them into an area bounded by GPS coordinates, and they will autonomously search the area for valid targets, discriminate among the candidates, and either kill one or self-destruct (if no target is available and flight time has expired). The only decision humans make with these (they are fire-and-forget) is where to set the search box and what type of target they should look for. Now, while there isn't a great likelihood of taking out a sedan when "tank" was selected, imagine the uproar if it took out the local hillbilly-equivalent camo-painted minivan.

Link to comment
Share on other sites

Really, we already have some weapons that can be considered autonomous. There are missiles where you can select the type of target (e.g. tanks or other armored vehicles), launch them into an area bounded by GPS coordinates, and they will autonomously search the area for valid targets, discriminate among the candidates, and either kill one or self-destruct (if no target is available and flight time has expired). The only decision humans make with these (they are fire-and-forget) is where to set the search box and what type of target they should look for. Now, while there isn't a great likelihood of taking out a sedan when "tank" was selected, imagine the uproar if it took out the local hillbilly-equivalent camo-painted minivan.

A certain "new tech is evil" bias is also involved. If a old technology causes an accident, nobody really cares. However if a new technology is being used in the same field and causes a similar accident, a massive uproar will follow.

Link to comment
Share on other sites

He probably used a translator, which thought that "autistic" is a synonym for "retarded". But in this case it's not, as "retarded" refers to torpedo delay, and not developmental delay of humans.

Or it's a typo for 'acoustic', which fits with a torpedo acquiring and hitting the ship that launched it - with WW2 technology, the best you can do is home on the loudest sound in a given arc.

Link to comment
Share on other sites

Autistic torpedos?

Did you select this word to make some sort of point, or is English not your native language? (I'm not trying to be confrontational, I'm genuinely curious).

Magnemoe is from Finland, IIRC. Maybe cut him/her some slack? We're an international community using English as a lingua franca. Those of us who speak English fluently should be glad we don't have the same challenge of using a second or third language to participate in this forum.

Link to comment
Share on other sites

Thanks, I am one of those.

------------------------------------------------

This, by the way, may allow a little jump in Moore's Law. It has been floating around the news lately, but this is the first time I have seen it well explained in this video:

It will come out in a few months.

3D XPoint is currently in production at Intel Micron Flash Technologies, and the makers say that the technology will have a wide range of applications, including machine learning, immersive 8K gaming, fraud detection, genetic analysis, disease tracking, and social media.

http://www.gizmag.com/intel-micron-memory-breakthrough/38664/

Link to comment
Share on other sites
