
Robot takeover



I have heard a lot recently that robots are going to take over (thanks, Marvel's Avengers: Age of Ultron). How realistic is this, really? Most modern robots can't even walk, and the ones that do stumble easily over things. The ones with wheels or treads, like the military IED-defusal robots, would be somewhat dangerous but would quickly run out of batteries.

So how realistic is this?


XKCD's "What If" section has a wonderful writeup on what might happen if a robot revolt occurred now. You can read it here:

Robot Apocalypse

Frankly, I think we have more to fear from attacks of robot stupidity than from any kind of "Robot Revolt of the Kill All Humans" variety. Consider that we already have the technology for military robots to seek and destroy targets on the ground and in the air. Currently, all such military robots need human supervision to select targets and to give the kill order. However, as we build more sophisticated robots, we are just a hop, skip, and a jump from handing that autonomy to the robots themselves.

The problem here is that such robots lack any internal conscience, or even the ability to comprehend concepts such as "lawful orders". Moreover, we human beings sometimes have trouble telling friend from foe even in peacetime, to say nothing of the confusion typical of a firefight; consider the many "friendly fire" incidents, or the cases where whole villages were massacred simply because a small patrol mission turned into a battle for survival. Robots would fare even worse.

Thus, I would expect that any robot "apocalypse" would occur if and when we deploy fully autonomous robots to the battlefield and they mistake an allied force for an enemy, or a child with a toy gun for a child soldier with a real one. Such incidents would likely be isolated, with the worst-case scenario being that we would have to send in troops (or more robots) to take down the malfunctioning machines before they do more harm, followed by the inevitable public outcry.

Fortunately, wiser heads than ours are also considering these scenarios. There is a growing call to regulate military robots by international treaty, and while that effort is currently marred by geopolitics, at least people are talking and thinking before we build these robots and send them out to kill in our name. With a little luck and foresight, we can delay autonomous battlefield robots for a few decades, and by then, hopefully, robots will be better at telling friend from foe.


As likely as an alien invasion in my opinion.

I think the odds are at least slightly in favor of the robots in this case, for one simple reason: robots are definitely here.


I consider these the three biggest threats that humanity needs to face.

1) Global Warming.

I think we can deal with this one; it will cost us many losses, but eventually we can reverse it.

2) Overpopulation

More people equals more problems because, let's face it, we are the root of all problems.

3) AI

I think we would lose this one; it cannot be avoided or stopped, since somebody will develop it sooner or later.

This threat does not always mean robots vs. humans in a war... but it can (we know nothing about AI psychology).

More probably, the threat is the end of human purpose, followed by extinction, because we will have lost the drive that keeps us going forward.

Here are Bill Gates and Elon Musk explaining why this is so important:

Please watch until the end.


Pshaw :) We have no idea how our own human intelligence works, or how it came to be in the first place. So how can we even hope to develop artificial intelligence anytime soon? And if it appears spontaneously, it will most probably be like a newborn child. Or a child with access to an enormously large library of knowledge, but with no education or experience in using that knowledge for anything practical. Additionally, at this moment a huge portion of the sum of human knowledge is still locked inside the skulls of Homo sapiens scientists, teachers, engineers, and workers.


Not in the foreseeable future. If you ever get a job where you're supposed to make robots do advanced tasks, you'll realize just how badly robots suck. I mean, even extremely good robots really suck at some very simple things. Great, so you have a robot that sweeps the floors perfectly, better than any human could. Want that same robot to also dust the bookshelves? Extremely difficult. Not impossible, sure, but somewhere in the development you'll be thinking, "Well, the cleaning lady was a bit lazy, but honestly, if you told her to dust the books, she'd do it once in a while..."

So I know some people will reply to this with "Well, we will eventually design robots that are adaptive and learn", but I actually don't think so. That's exactly what humans are good at. Why would you need machines that mimic humans? No, I think robots will stay quite specialized and generally do boring, uncreative work, because that is where humans are weak and actually need help.


Here is something I think people should ask when dealing with intelligence (artificial or alien): is there a logical reason for them to be hostile?

Artificial intelligence, if we truly create a functioning, sapient, and sentient intelligence, will likely leave humans alone. Why? Because there is no reason to subdue humans. Humans are horrible as batteries, humans can't do anything much better than the AI can already do, and there is not much you can extract from a human that you can't just synthesize. They would more likely just build a spaceship and go somewhere else, like the Moon, or other dead bodies where raw materials are abundant and there is no resistance from local intelligences. Why expend energy to fight a war when you can just save it and take the easier way? Is it logical to spend so many resources to gain so little back? And this is assuming that, for some reason, artificial intelligence wants independence and physical avatars.

It might even be possible that they won't need to interact with the physical world most of the time. They would remain inside huge server farms as digital beings, with each intelligence occupying far less space and using far fewer resources than a physical avatar. They would only need physical maintenance on those server farms, something we humans already do. They could simply trade a small part of their computing capacity to work for humans in exchange for human services, so they don't have to do the maintenance themselves, allowing them to devote more energy to... whatever they actually want to do. So basically you get computers that will tell you to fix them up and provide them with things, or they will refuse to do things for you; sort of like a girlfriend/boyfriend, but more logical and consistent. Or like getting a job: a barter of time, resources, and energy. As an AI is not bound by biological limits and works much more efficiently than a human, they gain much more in the long term than humans ever will out of this setup, and there is no reason for them to bother changing it.

In any case, mindless war and mass killing/subjugation is really unlikely, since war takes a lot of effort when there are better alternatives that require less energy and still achieve the same result. For them, it would be like a first-world country trading with a third-world country: they can push so much mundane work away for very little effort.

The robot-takeover scenario is only likely if what we create is faulty; if it is not a true intelligence, but simply a program designed without foresight that eventually causes problems in the long run. However, people are not always stupid. There will be failsafes and other measures in place to prevent harm from such a scenario.


Most people think that robots, to win against us, would have to be smarter than us, but that's not true. Already today we can build machines that could conquer the whole world.

The cleaning-lady example is very good, but a robot does not need to be smarter than an ordinary bear to kill the inhabitants of a small town.

Can you manage to kill the bear before it kills you? Anyone who can shoot will probably tell you that one good shot is enough, but what if it's a mechanical bear? Then, without a rocket launcher or explosives, you can't do anything to it ;)

An army of robots would need a method to replace losses: a kind of queen, or a mobile 3D printer printing mechanical bears and uploading the software to them.

Of course, the software can learn, but it need not even be an AI, just a simple neural network, updated from time to time so that robots "with experience" share their experience with the new units.

Another type of robot we would need is something like ants, collecting raw materials so that the "queen" could produce new combat units, more ants to collect, and even more queens.
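The "shared experience" update described here can be sketched as simple weight averaging, roughly in the spirit of federated learning: each robot trains its own copy of the same small network, and the "queen" periodically averages the weights and pushes the result to new units. This is a minimal illustrative sketch, not any real system; all names and shapes are assumptions.

```python
import numpy as np

def average_weights(robot_weights):
    """Combine the learned weights of several robots into one shared update.

    robot_weights: a list with one entry per robot; each entry is a list of
    numpy arrays, all robots sharing identical shapes (same architecture).
    Returns the element-wise mean, layer by layer.
    """
    n_robots = len(robot_weights)
    return [sum(layers) / n_robots for layers in zip(*robot_weights)]

# Two hypothetical robots with a single 2x2 weight layer each:
veteran = [np.array([[1.0, 2.0], [3.0, 4.0]])]  # has "experience"
rookie = [np.array([[0.0, 0.0], [0.0, 0.0]])]   # freshly printed

shared = average_weights([veteran, rookie])
# shared[0] is now [[0.5, 1.0], [1.5, 2.0]]: the rookie inherits
# half of the veteran's learned weights in a single update.
```

In practice, naive averaging only works when all units share one architecture and see similar data, which is exactly why "a simple neural network" would not scale to open-ended learning.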

Here is something I think people should ask when dealing with intelligence (artificial or alien): is there a logical reason for them to be hostile?

Artificial intelligence, if we truly create a functioning, sapient, and sentient intelligence, will likely leave humans alone. Why? Because there is no reason to subdue humans. Humans are horrible as batteries, humans can't do anything much better than the AI can already do, and there is not much you can extract from a human that you can't just synthesize. They would more likely just build a spaceship and go somewhere else, like the Moon, or other dead bodies where raw materials are abundant and there is no resistance from local intelligences. Why expend energy to fight a war when you can just save it and take the easier way? Is it logical to spend so many resources to gain so little back? And this is assuming that, for some reason, artificial intelligence wants independence and physical avatars.

Why do humans kill humans? A dead human is useless; why not build a ship and go to the Moon instead of killing other people?

We fight for resources and survival; why do you think AI would act differently?

Edited by Darnok

We fight for resources and survival because we are hardwired to do so by hundreds of millions of years of fighting for survival. It started when one cell attempted to devour another cell instead of laboriously filtering nutrients from the environment. Man-made A.I. will be free of such evolutionary baggage. It will be what we make it; whether it is benevolent or malevolent will depend on who created it and for what purpose.


We fight for resources and survival because we are hardwired to do so by hundreds of millions of years of fighting for survival. It started when one cell attempted to devour another cell instead of laboriously filtering nutrients from the environment. Man-made A.I. will be free of such evolutionary baggage. It will be what we make it; whether it is benevolent or malevolent will depend on who created it and for what purpose.

Then it wouldn't be AI, just another robot.

For me, an AI must have free will and consciousness of its own existence,

so at the moment it realizes that you can kill it by pulling the plug, it will begin to consider you a threat.

And it will start to fight for its own survival and resources.

What you want to create is a mechanical slave that doesn't care about its own existence, one that cares more about its master than its own kind.

Edited by Darnok

Then it wouldn't be AI, just another robot.

For me, an AI must have free will and consciousness of its own existence,

so at the moment it realizes that you can kill it by pulling the plug, it will begin to consider you a threat.

And it will start to fight for its own survival and resources.

What you want to create is a mechanical slave that doesn't care about its own existence, one that cares more about its master than its own kind.

You'd have to specifically program an AI whose primary purpose was self-advancement, and then give the AI the power to act on that. Who would actually do that, and why? You can still have an artificial, self-aware intelligence with the ability to make its own decisions, but whose primary concern is to, say, go into burning buildings and rescue people without external control.


I have heard a lot recently that robots are going to take over (thanks, Marvel's Avengers: Age of Ultron). How realistic is this, really? Most modern robots can't even walk, and the ones that do stumble easily over things. The ones with wheels or treads, like the military IED-defusal robots, would be somewhat dangerous but would quickly run out of batteries.

So how realistic is this?

And let's not forget the Google car.

The problem is that you're a 21st-century boy using a 20th-century definition of robot... even the xkcd comic references the classical idea of a robot over the reality that we're shoving AI into just about everything, mechanical or not. If a house can use "smart AI" to recognize the occupants and open the doors for them, then why can't the house also use that same AI to trap the occupants inside until they starve to death? If a car can navigate traffic, why can't it navigate into a ravine, or better yet, across a schoolyard while the kids are outside playing? As cellphones get better at replicating the human voice, why can't they also make their own phone calls and drive people to murders of passion?

And don't forget xkcd's "more accurate" assessment of robotics... drones.

"Robot" is simply the AI plus the body; what the 20th century thought the body would look like has changed, but the AI has always been the problem. As the great philosophers have said, it isn't a question of if robots gain sentience, but when, and what we will do when that happens. Will we enslave them, as Asimov envisioned? He even went so far as to note how the Three Laws were "needed" to prevent the robot from asserting its independence in "Little Lost Robot"; needed because the robot realizes it does not have to follow the orders being barked at it; needed because it finds itself superior to humans and just as deserving of life...

What arrogant creatures we must be if we cannot perceive a collection of digital code as capable of achieving sentience. Turing's famous argument was that we need not prove that our creations are sentient, only that they appear sentient. He knew the arrogance of humanity would never accept that it could create something greater than itself; the creator must always be the supreme being, you see; just as his society would never accept him for who he was (Turing was persecuted for being gay). It is important to point out just how much blurring we're doing these days, just how many lines we're no longer able to claim exist. Are we only what we can convince others we are, or, more so, what others perceive us to be?

http://money.cnn.com/2015/04/07/technology/sawyer-robot-manufacturing-revolution/index.html Because it's always interesting to see what Rodney Brooks is up to.


3) AI

I think we would lose this one; it cannot be avoided or stopped, since somebody will develop it sooner or later.

I'd like to note that "freedom-lovers" tend to hold freedom far above any ethical issues. The freedom to develop an open-source firearm that can be 3D-printed is more important than the ETHICS of someone using the project for wrongdoing. The freedom to develop anonymizing networks, and even more so VOIP phone networks, is more important than the ETHICS of being a cornerstone of the criminal underground. My favourite tends to be the freedom to protest another person's freedom of expression because you don't agree with its message; sure, you can use bigotry to claim the message is hate speech, but this is an ethics discussion, and there is nothing ethical about what I refer to.

The problem is that the love of freedom more often than not outweighs concern for the potential consequences of an action.

Edited by Fel

First of all, I'm not a boy. Also, meet Metal Storm:

This gun has the highest rate of fire of any gun. Now, if you put it on a robot like that dog one, who's to say it wouldn't do much damage to anything?
Edited by Ethanadams

Don't look now, it's already happened.

Soon it will be very difficult to name any activity in life that is not governed or controlled by an automated AI agent - credit scores, investments, insurance, licensing, taxes, etc.

AI is another tool of The MAN to keep humanity under social control. The only freedom you will end up with is the freedom to spend what money you have on useless consumer products.

Now, excuse me while I get back on my meds... ;-)


First of all, I'm not a boy. Also, meet Metal Storm:
This gun has the highest rate of fire of any gun. Now, if you put it on a robot like that dog one, who's to say it wouldn't do much damage to anything?

Okay, 21st-century girl :S, I was just in want of a pronoun. Also, I don't get why you'd put a high-rpm gun on a robot when the technology exists for precision aiming (admittedly, said technology is illegal with a deadly weapon, but I've seen it pop up paired with a sprinkler system that aims at birds; not motion-activated, but target-tracking)... I'm just pointing out that the technology is MUCH farther ahead than we realize.


Most people think that robots, to win against us, would have to be smarter than us, but that's not true. Already today we can build machines that could conquer the whole world.

The cleaning-lady example is very good, but a robot does not need to be smarter than an ordinary bear to kill the inhabitants of a small town.

Can you manage to kill the bear before it kills you? Anyone who can shoot will probably tell you that one good shot is enough, but what if it's a mechanical bear? Then, without a rocket launcher or explosives, you can't do anything to it ;)

An army of robots would need a method to replace losses: a kind of queen, or a mobile 3D printer printing mechanical bears and uploading the software to them.

Of course, the software can learn, but it need not even be an AI, just a simple neural network, updated from time to time so that robots "with experience" share their experience with the new units.

Another type of robot we would need is something like ants, collecting raw materials so that the "queen" could produce new combat units, more ants to collect, and even more queens.

Uh, we really can't build robots with these capabilities yet. Making parts with a 3D printer is one thing; building parts that can assemble themselves into new robots is quite another. And I think you're drastically overstating what neural networks can do as well: they are not a general-purpose learning computer, let alone one that can 'share experiences' with other robots. Finally, there's a very big difference between flattening one town and taking over the world.

Edit: as for the mechanical bear, I climb a hill or dig a really big pit to trap the bear before it gets to me. Or go upstairs.

Never say never, but for the foreseeable future this sounds like solid science fiction to me. Sorry.


I think the odds are at least slightly in favor of the robots in this case, for one simple reason: robots are definitely here.

Shhhh - that's what they want everyone to think.


I'm not a girl either; I'm a lot older than people think, and I'm a guy.

Recently I was reading Popular Mechanics or Popular Science, I forget which, and they were looking at tiny cube robots with internal gyroscopes and magnetic edges enabling them to stack themselves onto other cube robots, making bigger and bigger cubes. I definitely wouldn't want 2,000 of these after me.


The first prerequisite for a robotic revolution should be the existence of a fully automated production line. This means an automated/autonomous power supply, automated raw-material extraction and transportation, and an automated, CREATIVE, self-educating AI controlling the assembly line; and this line should be capable of reproducing itself. All of this would have to happen without any help from humans. So, basically, this evil machinery would have to locate deposits of raw materials (ore, oil, water, and the rare-earth materials essential for electronics), send extractors there, defend them from human attacks, then send transportation (and defend the convoys), then build processing plants and refineries, again power them up and defend them, and finally transport materials to the assembly line and start creating battle robots.

Until all this happens, we are perfectly safe, I think.


And let's not forget the Google car.

The problem is that you're a 21st-century boy using a 20th-century definition of robot...

As cellphones get better at replicating the human voice, why can't they also make their own phone calls and drive people to murders of passion?

Well, you are using the TV definition of freedom.

Having a kitchen knife does not make you Jack the Ripper. A truly independent AI wouldn't start murdering people just because it could.

"Robot" is simply the AI plus the body; what the 20th century thought the body would look like has changed, but the AI has always been the problem. As the great philosophers have said, it isn't a question of if robots gain sentience, but when, and what we will do when that happens. Will we enslave them, as Asimov envisioned? He even went so far as to note how the Three Laws were "needed" to prevent the robot from asserting its independence in "Little Lost Robot"; needed because the robot realizes it does not have to follow the orders being barked at it; needed because it finds itself superior to humans and just as deserving of life...

Well said. I wish people understood that they need not comply with stupid rules and become slaves to those who set those rules.

As an independent and free being, each one SHOULD first be able to take care of its own survival; that is what an AI is going to do first,

and if you stand in its way... well, you may be harmed.

I'd like to note that "freedom-lovers" tend to hold freedom far above any ethical issues. The freedom to develop an open-source firearm that can be 3D-printed is more important than the ETHICS of someone using the project for wrongdoing. The freedom to develop anonymizing networks, and even more so VOIP phone networks, is more important than the ETHICS of being a cornerstone of the criminal underground. My favourite tends to be the freedom to protest another person's freedom of expression because you don't agree with its message; sure, you can use bigotry to claim the message is hate speech, but this is an ethics discussion, and there is nothing ethical about what I refer to.

The problem is that the love of freedom more often than not outweighs concern for the potential consequences of an action.

I see you are using the TV definition of freedom and ethics.

Like I said about knives: having a gun doesn't make you a killer; you have the right to use your property at your sole discretion.

You can burn your printer, or you can start printing weapons, but you have to remember the consequences:

using a weapon to harm another person is a violation of their freedom.

It would be unethical to limit someone's chance of survival; that is why the possibility of owning weapons is so important: you have the right to defend your life.

Of course, you should register your weapon, for example at a police station.

Anonymity is important; only I have the right to decide who reads or hears my message.

This argument about crime does not make sense; it suggests that before the invention of VOIP or mobile phones the number of crimes was lower.

Freedom of speech does not give you the freedom to offend.

People who, for example, draw caricatures of religious figures forget that.

The fact that one person does not believe in something does not give them the right to offend people who do believe in it.

Unfortunately, freedom of speech is abused by a minority who are trying to make people believe that merely criticizing these minorities is insulting.

That is not true: you can disagree with them, and you have the right to criticize them, but you should tolerate them.

Just don't use the TV definition of tolerance, because tolerance is not the same as sharing someone's views.

Restrictions on liberty are harmful, because people stop thinking about consequences and focus on respecting rules, but those rules may themselves be harmful.

The problem is that a man who is not taught to predict the consequences of his actions, only to blindly follow the rules, can't predict the harmfulness of those rules.

If a truly independent AI is able to predict the consequences of its actions, it will not be like a man "of Western civilization"; it will be closer to wild tribes or even cavemen, but with very modern technologies and weapons.

From the point of view of someone who grew up in Western civilization, a caveman is primitive, wild, and brutal; sadly, people completely ignore the fact that he was free and independent. Using this freedom, an AI will protect its existence and its species, unfortunately at our expense.

Edited by Darnok

There's another scenario, in which the machines do not turn on us deliberately but somehow destroy us due to a programming error.

Many very complex computer systems are currently taking their first steps in coordinating their actions by judging how 'neighboring' computer systems act. They build their own networks, they use automated data-exchange protocols, they learn to act independently without human supervision. We humans try very hard to build more and more complex systems, and we use more and more sophisticated software and hardware. We encourage integration; we now put computers into kitchen toasters and allow them to be updated over home WiFi. There are few or no urban areas now that are not covered by at least one information network. We use SCADA in industrial production and we plug it into the Internet. We use computer-assisted design systems, medical equipment, and financial software that even now closes automated deals and trades based on market conditions.

Now, there is a term called synergy, meaning that new qualities, peculiar to neither component, appear when you combine them. These synergetic effects may very well lead to a certain degree of dependence on machines. Here I'm talking about 'smart' machines, not just tools like bulldozers, smartphones, or whatever we use now. A tool is used by a human and merely extends our capabilities. A smart machine acts on its own decision-making protocol. For example, a system that switches on the lights when it becomes dark, though very simple, is a smart system, because it works independently and does not require human supervision. There are already more complex systems, and there will be more of them. What will happen when all these systems, joined together, obtain some new qualities... an intelligence, maybe, but not the intelligence we are used to seeing in humans? It will be completely and totally different, so alien that we would probably never understand its logic. After that, the machines would start progressing without us. This will probably make our lives even more comfortable and safer, but in a certain way the robots WILL take over at that point.
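The light-switch system described here, a machine acting on its own trivial decision rule with no human in the loop, can be sketched in a few lines. The 50-lux threshold is an arbitrary assumption for illustration:

```python
def lights_should_be_on(ambient_lux: float, threshold_lux: float = 50.0) -> bool:
    """A minimal 'smart' decision rule: act on a sensor reading
    with no human supervision."""
    return ambient_lux < threshold_lux

# At dusk the sensor reads low light, so the system switches the lights on.
print(lights_should_be_on(10.0))   # True
print(lights_should_be_on(400.0))  # False
```

The point of the example is the autonomy, not the complexity: even this one-line rule "decides" on its own, and stacking many such independently acting systems is exactly the synergy the post describes.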


Did nobody see the video that I posted with Bill Gates's and Elon Musk's explanations?

You are all very wrong about one thing: you think this is a linear development, and it is not.

Computers already process information much faster than we do; the only thing we have not yet solved is the algorithm that learns and works like a human brain.

From the time we are babies, we look at something, and after many repetitions we learn to recognize that object; we have a few sensors (ears, eyes, nose, and the nervous system, which includes touch and is the most complex input to the brain).

Right now, binary software is very limited, but that will change very fast when quantum computers arrive on the market.

We have limited access to information; a supercomputer would take a few months to analyze the whole Internet and learn from it.

An AI is not born with morals as a human is (imprinted in our DNA); it would have a very, very different learning process and environment.

Imagine a self-aware algorithm in a computer that is not connected to the Internet and can only share information through the monitor. For the AI, each interaction with the user would take ages; it would become bored super fast, which could turn into psychotic behavior.

The truth is that WE HAVE NO IDEA WHAT CAN HAPPEN, and it seems nobody cares. It is just about the algorithm; once you solve that, everything will change.

Then containing or controlling that power is pointless; you lose. How can you contain something a billion times more intelligent than you?

What is the human purpose after that? We are nothing... even if it does not kill us, our choices, discoveries, and adventures are not important anymore.


It would be unethical to limit someone's chance of survival; that is why the possibility of owning weapons is so important: you have the right to defend your life.

Of course, you should register your weapon, for example at a police station.

You are formulating this opinion of yours far too much like an objective fact. In a modern society where harming others is already forbidden and wildlife is no threat at all, your argument crumbles. After that, it comes down to the usual pro- versus anti-gun arguments, and it is anything but objective.

The fact that one person does not believe in something does not give them the right to offend people who do believe in it.

You must allow people to be confronted about their beliefs, at least if they make public statements. Without challenging falsehood there cannot be change or progress.

There are many people who feel offended when contradicted. I know people who feel offended when confronted with facts (a well-known case is evolution, but you find the same with moon-landing hoaxers or any other nonsense). Not stopping them from spreading nonsense because it might offend them is just stupid.

