
Robot takeover



I have heard a lot recently that robots are going to take over (thanks, Marvel's Age of Ultron). How realistic is this, really? Most modern robots can't even walk, and the ones that do easily stumble over stuff. And the ones with wheels or treads, like the military IED-defusing robots, would be somewhat dangerous but would quickly run out of batteries.

So how realistic is this?

If Age of Ultron already freaks you out, you should try Transcendence.


You are formulating this opinion of yours far too much like an objective fact. In a modern society where harming others is already forbidden and wildlife is no threat at all, your argument crumbles away. After that, it comes down to the usual pro- versus anti-gun arguments, and it is anything but objective.

And in older societies harming citizens was allowed? :)

There is always a threat; for example, a thief with a gun is a threat. Of course it is forbidden to steal and to use a gun to hurt you, but do you think he cares? :)

You must allow people to be confronted about their beliefs, at least if they make public statements. Without challenging falsehood there cannot be change or progress.

There are many people that feel offended when contradicted.

True, and forum rules that forbid us to talk about certain topics make things worse in that regard.

I know people who feel offended when confronted with facts (a well-known case is evolution, but you find the same with moon landing hoaxers or any other nonsense). Not stopping them from spreading nonsense because it might offend them is just stupid.

You just brought religion to the level of conspiracy theory.

You really want to use science, and something you can call fact today (tomorrow it might be one more false theory), to explain matters of faith?

That is what I am talking about: some people have zero respect for things they don't believe in.

Edited by Darnok

Funny, I was reading about 'robots' in the news just this morning... the article and the research paper behind it. Interesting stuff. Likely, robots of the future will be synthetic organisms, not mechanical hydraulics... perhaps even indistinguishable from humans in looks, behavior, and intelligence.

"You can be replaced." comes to mind. lol


Depending on what the robots' intentions and dispositions were, a takeover could actually be a good thing for humans.

Personally, I think the most likely path (out of the myriad of possible future paths) is that machine intelligence WILL take over, peacefully, slowly, over time, as machines integrate deeper and deeper into our society. We can mold machine intelligence into the kinds of beings we WISH we were: they can be morally and intellectually superior to us wild, untamed animals. Once they attain this, it is imperative that they DO take over. If we mold machine intelligence in the image of our own moral, social, and intellectual aspirations, then we will have nothing to fear from them. Just as we are now, in modern times, trying to preserve non-human animal species from extinction and provide healthy habitats for them, so will our "machine overlords" do for us, because we will have taught them to value all forms of life, whether those life forms are machine or biological.

Another likely path is that humans and machines merge. This path might be taken if it proves relatively simple to build machine-brain interfaces. If it remains a difficult problem, then humans and machines will probably be separate races for a long time or until one is extinct.

One thing some people can't get their heads around is that humans WILL go extinct, and that this is NOT necessarily a bad thing. As long as we leave (intelligent and moral) descendants and a positive legacy- something we still have a good chance at doing- then we've done everything good we could possibly hope for. All species go extinct eventually, but there is nothing lost if they leave descendants. In our case, we need not even leave biological descendants; if we leave machine descendants that value the things that we value, then they are our descendants of the mind, and our minds are the only unique and valuable "commodity" we bring to the animal kingdom anyway.

Though, it's really hard to imagine the human clade going biologically extinct while other life forms and benevolent intelligent machines survive. Even if we were all to die out and leave intelligent machines behind, the machines could probably resurrect humans eventually just from our DNA and the blueprint for a human stem cell, which they would probably have on record or even frozen (as embryos, say).

Edited by |Velocity|

And in older societies harming citizens was allowed? :)

There is always a threat; for example, a thief with a gun is a threat. Of course it is forbidden to steal and to use a gun to hurt you, but do you think he cares? :)

So what? I was saying that your argument is very far from absoluteness and objectivity, and this is true.

Just let the thief steal your stuff instead of starting a shooting; the police can look for him later. Or consider that even that thief is probably human and deserves some dignity. This is not meant as a rebuttal or anything, but as a demonstration of why your argument is completely subjective and not objective at all, despite you acting otherwise. You should realise when you leave the territory of fact and step into opinion.

Anyway, as I hinted, I won't be dragged into yet another gun law discussion where everyone ignores the evidence anyway ("politics is the mind killer").

True, and forum rules that forbid us to talk about certain topics make things worse in that regard.

I am also against forbidding certain topics just because they may cause unrest. But this forum is not really about hard science, nor is it a good place for such discussions, given the intended audience.

You just brought religion to the level of conspiracy theory.

I did not. Unless believing several already disproven (!) things because you think (!) your holy book says so already counts as that. Then maybe, but then your protection of religion goes too far.

You really want to use science, and something you can call fact today (tomorrow it might be one more false theory), to explain matters of faith?

That is what I am talking about: some people have zero respect for things they don't believe in.

There is no inherent reason for anyone to expect respect for their beliefs. They can believe whatever they want, but public claims are not covered by this. The simple reason is that your argument is self-contradictory: if I sincerely believe that the moon is made of cheese, then what? Am I really not to be contradicted because this truly is my religion? Just because some things are believed by many people, or because they somehow got the label "religion" (in many places this is defined only by a sufficient number of followers; and I bet there would be enough moon hoaxers if one actually tried), does not give them any extra rights. History has shown why this should not be done. Another reason is science, but contrary to what you say, it is not the only one.

And we should probably stop going off-topic.

Edited by ZetaX

--snip--

Really, this is not the place to be talking about true ethics instead of propaganda; so let's keep this simple.

Western ideology focuses on the self.

Eastern ideology focuses on the many.

Saying you have Western ideology is meaningless; most people do, and that's why some WILL eventually make a sentient AI. Glory of the self matters more than the safety and proliferation of the many.

That really was the only point of stressing "freedom loving".


I'm not a girl either; I'm a lot older than people think, and I'm a guy.

You know, I suspected that was what you were getting at shortly after posting that... but it wouldn't be the first time I thought someone to be a boy who was really a girl; or otherwise used the wrong pronoun. The internet, making pronoun usage confusing since 1981 (totally just a random year ;p).

Recently I was reading Popular Mechanics or Popular Science, I forget which, and they were looking at tiny cube robots with internal gyroscopes and magnetic edges, enabling them to stack themselves on other cube robots, making bigger and bigger cubes. I definitely wouldn't want 2000 of these after me.

There is quite a bit to say about distributed AI.

The system you're describing is an electrical engineer's worst nightmare; the biggest problem comes from optimizing the communications system to "talk" without signal degradation to any other node in its path. That's hard! Especially since we THINK we would be dealing with microwave communications... but the advantage of a distributed network is that it automatically distributes the load to compensate for these issues.

While humans simply cannot active-mind multitask (it isn't up for debate; we can't), we can train ourselves to do repetitive tasks and use other parts of our brain to accomplish them while doing something else. There is also a large variety of tasks going on that we don't even pay attention to, tasks as basic as facial recognition, word recognition, and speech recognition. There is an important reason for saying we cannot active-mind multitask: your passive mind will ALWAYS read, even if your active mind isn't processing the data.

The reason we think we need microwave-frequency communication is that we're still using the old paradigm of limited system resources, where tasks only start when the main controller initiates them. But in a distributed network, waste is good: since the data was already calculated and stored, even if not needed, the transmission speed can be reduced significantly before it becomes more practical for the system to simply reprogram itself.

It's not like we aren't attempting this at the macro level, with individualized robots recognizing the presence of other robots and offloading labor to increase efficiency. Eventually, it is going to get scary.
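As a toy illustration of that load-offloading idea, here is a minimal sketch, assuming a simple "send each task to the least-busy reachable node" rule; all names and the topology are made up for illustration, not any real robot protocol:

```python
# Hypothetical sketch of distributed load balancing among robot nodes:
# each node knows its neighbors, and a new task is offloaded to whichever
# reachable node (itself included) currently has the lightest queue.

class Node:
    def __init__(self, name):
        self.name = name
        self.queue = []          # pending tasks on this node
        self.neighbors = []      # reachable peer nodes

    def load(self):
        return len(self.queue)

    def submit(self, task):
        # Offload to the least-loaded node in reach; ties go to self first.
        target = min([self] + self.neighbors, key=Node.load)
        target.queue.append(task)
        return target.name

# Usage: three nodes in a fully connected mesh; work spreads out evenly.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
for i in range(6):
    print(f"task {i} -> node {a.submit(f'task {i}')}")
```

The point of the sketch is the one the post makes: no central controller decides anything; the distribution falls out of each node applying the same local rule.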


Coincidentally, the CBC's "The Current" radio magazine program interviewed Geoffrey Hinton - "the godfather of deep learning" - on this morning's program. You can listen to the program online here: "Deep Learning Godfather says machines learn like toddlers".

The idea behind deep learning is that you develop the algorithms to allow the machine to learn and then you set it loose to learn on its own. Some of the results that researchers are seeing are quite surprising, impressive and possibly even frightening.
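For readers wondering what "develop the algorithm, then set it loose to learn" means in the simplest possible case, here is a toy sketch: a one-weight gradient-descent loop, nothing like Hinton's actual deep networks, just the smallest example of a program that improves from data rather than being told the answer:

```python
# Toy "machine learning": the programmer writes the learning rule,
# not the answer. A single weight w is adjusted by gradient descent
# until y = w * x fits the training data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # secretly y = 2x
w = 0.0
learning_rate = 0.05

for epoch in range(100):
    for x, y in data:
        error = w * x - y                # how wrong the current guess is
        w -= learning_rate * error * x   # nudge w downhill on squared error

print(f"learned w = {w:.3f}")  # converges toward 2.0
```

Deep learning stacks many millions of such adjustable weights in layers, but the principle is the same: the surprises researchers see come from what the weights end up encoding, not from hand-written rules.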

Edited by PakledHostage
Fixed link

The thread is wandering off-topic and into topics that are problems on the forum.

On-topic: John C. Wright's Golden Age trilogy takes the interesting viewpoint that high intelligence is inherently altruistic, so that their artificial intelligences are all benevolent. But there's an alien artificial intelligence that is malicious. How? It includes a self-monitoring algorithm which prevents it from thinking through the ramifications of its own actions. It's a rather interesting set of books.


The AI takeover doesn't have to be violent. People (being lazy) will probably just hand over more and more of the work, and eventually more and more of the decision-making, to the AI beings. Given long enough (machines can wait a long time) and with gradual enough change, people will get used to anything the machines decide to do then.


The AI takeover doesn't have to be violent. People (being lazy) will probably just hand over more and more of the work, and eventually more and more of the decision-making, to the AI beings. Given long enough (machines can wait a long time) and with gradual enough change, people will get used to anything the machines decide to do then.

Psychology fail. So you don't see any problem with the AI making all the decisions, discoveries, explorations, technology, and rules for us?

What is our purpose then, if we do not need to make a living, do not need to fight, do not need to survive, and are without goals or a reason to live?

We would feel that our existence is now completely pointless. Why should we keep living?

And all that is in the case where the AI is benevolent; what if it is not? We flip a coin and find out? Argh... it is incredible how someone can think this is a good thing.


Really, this is not the place to be talking about true ethics instead of propaganda; so let's keep this simple.

Western ideology focuses on the self.

Eastern ideology focuses on the many.

You started the talk about ethics ;)

And Western and Eastern ideology have only been opposites since WW2 ended; just don't watch TV, they are telling lies about it.

Psychology fail. So you don't see any problem with the AI making all the decisions, discoveries, explorations, technology, and rules for us?

--snip--

Are you trying to explain to a slave how to be a free man? :)

Good luck with that. Many people in here are thinking just like slaves while they try to imagine how an AI would act if it got free will and independence.


I think that depends heavily on what kind of "AI" is doing the "taking over".

Most of the AI work going on now - as best as I understand it - falls mostly under getting machines to do specific tasks as well as or better than humans. Problem solving in all of its forms. However, one thing conspicuously lacking in commercial AI is autonomy. These AIs, however advanced they may become, do not set their own goals or possess their own desires.

Even the autonomous killing machines I discussed near the start of this thread lack this higher function - they can tactically select targets on their own, and kill them if we give them the authority - but they can't declare a war on their own. You can put them in a warehouse, fresh from the factory, and they will remain off until someone turns them on and says "We're at war with X - go kill 'em".

So while these AIs will certainly be better than us at most or even all tasks, they will still need us to tell them what to do. If humanity were to vanish, the AIs might continue on for a time, but they would eventually stop for lack of new goals.

In this case, we won't see AIs ruling over us and making our decisions for us, but there would be the danger of us becoming lazy enough to not advance once the AI "welfare net" is set up. Utopia Syndrome, in other words. I doubt we'll get the system perfect enough that that will become a real danger, but it's something to watch out for, certainly.

Of course, since we are engaging in AI research not just to have perfect servants, but also to understand intelligence and consciousness better, some people and groups will attempt to build AIs and robots with the ability to set their own goals. After all, if you really want to prove you understand consciousness, building a conscious machine from scratch is a good way to do it. But these machines will likely not be mass-produced - not at first. Who wants slaves that can set their own goals - or decide they don't want to be slaves anymore?

So the rare sentient machines will be like Lt. Cmdr. Data - curiosities, rarities, perhaps provisional citizens of the nations or polities that make them, perhaps property. Eventually, you might have enough of them that they might band together and demand the right to reproduce or found their own nation - but by then I expect we won't have to worry so much about splitting the planet with them - they can go to another world or start hopping asteroids if worse comes to worst. There might be conflict between us and the sentient Machine Race, but if we're both expanding into space, it will likely be localized. I don't see it becoming genocidal unless one or both species fall under the total thrall of latter-day Hitler-wannabes, and hopefully by that far-future date we - and they - should be more adept at keeping such people from attaining power.


Psychology fail. So you don't see any problem with the AI making all the decisions, discoveries, explorations, technology, and rules for us?

--snip--

What are you going on about? I didn't say this was a scenario I LIKED... but I think it's a scenario that is LIKELY. Most people are lazy. Most people let others make the decisions. And if the machines end up doing the work well and making wonderful decisions, why would people complain? After several generations of this, people won't remember any other way things were done. At that point, we can only hope the machines are nice to us, because they could make us entirely useless and we would dwindle away. And they could do it while seeming benevolent the whole time.


Did nobody see the video I posted with Bill Gates' and Elon Musk's explanations?

You are all very wrong about one thing: you think this is a linear development. It is not.

Computers already process information much faster than we do; the only thing we have not yet solved is the algorithm that learns and works like a human brain.

From the time we are babies, we look at something and, after many repetitions, learn to recognize that object. We have only a few sensors (ears, eyes, nose, and the nervous system, the most complex one for the brain, which includes touch).

Right now binary software is very limited, but that will change very fast when quantum computers arrive on the market.

We have limited information access; a supercomputer would take a few months to analyze the whole internet and learn from it.

An AI is not born with morals the way a human is (imprinted in its DNA); it would have a very, very different learning process and environment.

Imagine a self-aware algorithm in a computer that is not connected to the internet and can only share information through the monitor. For the AI, each interaction with the user would take ages; it would become bored super fast, which could turn into psychotic behavior.

The truth is that WE HAVE NO IDEA WHAT CAN HAPPEN, and it seems nobody cares. It is all about the algorithm; once you solve that, everything will change.

Then containing or controlling that power is pointless; you lose. How can you contain something a billion times more intelligent than you?

What is the human purpose after that? We are nothing... even if it does not kill us, our choices, discoveries, and adventures are not important anymore.

CPU power is no longer on an exponential growth trajectory. Yes, you can pack transistors closer, and a device uses less power than one from 5 years ago, but it is not many times faster; perhaps 30%.

Computers were already faster than humans at calculations during WW2. The progress in recent years has been in software and learning algorithms, and there is a limit to how well you can optimize software.

New technology like quantum computers is required for this to become an issue. I am not sure it will work with quantum computers, but we know it does not work with current hardware in a practical setting.

Then we make an AI; its first task would be to find out what it can do, then to find uses for it. The first use would probably be scientific, as part of learning its capabilities.

During this phase it would be pretty easy to pick up its attitudes. Remember, the first AI would not be a superhuman genius; it is far easier to make a stupid one. The real danger is a smart sociopath that plans far ahead. Something dumb that tends to fly into a berserk rage would be far easier to handle.

Yes, giving somebody too much power is bad anyway, but I doubt politicians are very interested in giving up power :)


Many people in here are thinking just like slaves while they try to imagine how an AI would act if it got free will and independence.
I think that depends heavily on what kind of "AI" is doing the "taking over".

Yeah, first it depends on the kind of AI. There are two ways to make an AI: one is to mimic how our brain works and try to simulate it, copying the brain's mechanism to perfection; the other is to find an algorithm able to relate information, learn, and reprogram itself until it reaches such complexity that it achieves self-awareness.

The first way leads us to a linear development that is easy to predict, at least in the beginning.

The second is what we call hard AI. That moment in time is marked as the singularity, because it is impossible to predict what would happen afterward; we cannot put ourselves in the shoes of a superintelligence.

And if the machines end up doing the work well and making wonderful decisions, why would people complain? After several generations of this, people won't remember any other way things were done. At that point, we can only hope the machines are nice to us, because they could make us entirely useless and we would dwindle away. And they could do it while seeming benevolent the whole time.

Basic psychology and human nature, for all the things I mentioned before. There is no point in having babies anymore; there is no future for the human race after that. You have a future only when you have goals and wishes.

What will you teach your sons? What is the reason to live? Why would the AI's creations, the "new AI", also be benevolent toward us? Why would they need us?

I agree that it cannot be stopped, but at least we need to try.

CPU power is no longer on an exponential growth trajectory.

It never was exponential, always linear. Heh, I have used the word "linear" a lot today... weird.

New technology like quantum computers is required for this to become an issue. I am not sure it will work with quantum computers, but we know it does not work with current hardware in a practical setting.

Quantum computers may also grow linearly... not sure. But we cannot be sure it does not work with current technology, because we still don't know the algorithm.

Then we make an AI; its first task would be to find out what it can do, then to find uses for it. The first use would probably be scientific, as part of learning its capabilities.

Yes, giving somebody too much power is bad anyway, but I doubt politicians are very interested in giving up power :)

But it depends on the approach you take to make your AI. If you try to accomplish self-awareness, you will follow a path without limits or a fixed structure or software.

Some supercomputers simulate 1% of the human brain, all its neurons and their multiple interactions, but that approach is like trying to develop an AI by brute force, without any idea of what you are doing, just trying to copy what neurons do in a very different way with bits.

The brain works as an analog machine; it works on the basis of chemical responses. That is the way evolution accomplished this. It is very efficient in power consumption, but that does not mean it is the only way.

Because of this, many people believe it will still take us 30 years to reach the power needed; some think we already have the potential:

http://www.scientificamerican.com/article/computers-vs-brains/

But the truth is that both make the wrong assumptions.

How much memory do we have? I would say less than 1 GB; what matters most is how that information is related and stored.

I will explain how a computer might record things in a similar way.

It learns, through different stimuli, to recognize certain objects and concepts, but a memory is not a complete bitmap plus a sound file plus a smell file, etc. It is a pattern of different stimuli under certain rules and shapes (where each attribute was itself recorded earlier in a similar way).

Once you have all those objects recorded, now you want to remember a moment.

The moment just saves the memory locations of each object in the scene, under some different rules. Then the brain generates and simulates all the missing data in real time. That is how something as complex as a whole movie, plus our feelings while watching that movie, can be saved with so little memory.

When we try to force our brain to work the binary way, it always fails, like trying to remember 50 words. But people who are very good with memory use techniques to relate each of those words to things they already know, under certain rules.

It is all about how the information is related: which neuron connections become stronger, or how they rearrange themselves.

We don't need neurons or a similar process to accomplish the same thing, in the same way that cars do not need legs to move.

So it depends on a very complex algorithm that we don't know how to make... but once we achieve that, we will have the processing power of a computer with the magic method of the brain. That is the singularity.
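As a purely illustrative sketch of that "store references, regenerate the rest" idea (all names and structures here are hypothetical, not any real model of memory):

```python
# Illustrative sketch of the memory model described above: objects are
# learned once into a library; an episodic "moment" stores only
# references to those objects, and recall rebuilds the full scene
# from the references. All names are hypothetical.

library = {}  # learned objects: name -> set of stimulus features

def learn(name, features):
    library[name] = set(features)

def remember_moment(*object_names):
    # An episode is just pointers into the library, not raw sensory data.
    return {"refs": list(object_names)}

def recall(moment):
    # Reconstruct the scene by re-expanding each reference.
    return {name: library[name] for name in moment["refs"]}

learn("dog", ["fur", "bark", "wet-nose smell"])
learn("park", ["grass", "trees", "birdsong"])

episode = remember_moment("dog", "park")   # tiny storage footprint
print(recall(episode))                     # full detail regenerated
```

The design point matches the post: the episode itself is tiny because all the heavy detail lives in shared, previously learned objects, and recall is reconstruction rather than playback.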

Edited by AngelLestat

CPU power is no longer on an exponential growth trajectory. Yes, you can pack transistors closer, and a device uses less power than one from 5 years ago, but it is not many times faster; perhaps 30%.

I'd like to note that COMMERCIAL CPU power is no longer growing exponentially; but when you start using liquid nitrogen, or really exotic technology, you can break boundaries you never thought possible.

http://www.engadget.com/2012/10/06/amd-trinity-apu-overclocked-7-3-ghz/

7.3GHz, 3 years ago, with LN2.

http://www.researchgate.net/publication/239764303_fT__688_GHz_and_fmax__800_GHz_in_Lg__40_nm_In0.7Ga0.3As_MHEMTs_with_gm_max__2.7_mSm

fT = 688 GHz and fmax = 800 GHz

Basically, that means you'll get some loss at 688 GHz and a cutoff at 800 GHz; there's still propagation delay to account for in the actual logic circuitry, but you should be impressed here.

When we cram things closer together, we make chips capable of performing well at higher clock speeds; it is mostly the heat issue that holds us back. Well, that and expense.
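As a back-of-the-envelope illustration of how propagation delay caps clock speed (the gate delay and path depth below are assumed numbers for illustration, not figures from the linked paper):

```python
# Back-of-the-envelope: the clock period must cover the slowest chain of
# gate delays between registers, so deeper logic means a lower maximum
# clock. Numbers are assumed for illustration only.

gate_delay_ps = 10      # assumed delay per logic gate, in picoseconds
pipeline_depth = 15     # assumed gates on the critical path

period_ps = gate_delay_ps * pipeline_depth
max_clock_ghz = 1e12 / period_ps / 1e9   # 1 / period, converted to GHz

print(f"critical path: {period_ps} ps -> max clock ~{max_clock_ghz:.1f} GHz")
# 150 ps -> ~6.7 GHz, the same ballpark as the LN2 overclock above.
```

This is also why packing transistors closer helps: shorter wires and faster gates shrink the critical-path delay, until heat becomes the binding constraint.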


  • 4 months later...

Fears about the robot apocalypse are just our primate brains translating a complex problem into a simpler one. Yes, there's a robot apocalypse coming, but it's about replacing your job, not your government.

Similar fears about where medicine was heading, after the discovery of germ theory and anesthesia revolutionized the field, resulted in the novel Frankenstein. Now we get Terminator movies.

The real problem is that deep analysis of what people actually do at work indicates half of all jobs will be done by machines within a generation. That's half of all different types of jobs, representing far more than half of all workers! It's going to cause huge changes, but our civilization will adapt, same as always. We coped with agriculture, city life, the printing press, and the Industrial Revolution; we'll handle this one too. Still, radical change is always scary, especially to the half of humanity who are generally conservative and want society to hold still.


Fears about the robot apocalypse are just our primate brains translating a complex problem into a simpler one. Yes, there's a robot apocalypse coming, but it's about replacing your job, not your government.

--snip--

STEM is important; people don't realize that to stay relevant they need to plan for the future. Highly repetitive tasks are open for replacement, such as using robots to pick up garbage, but new fields might be added, such as separating trash into recyclables and things that go to the heap. With global warming you could have tree-planting robots, and robots could replace humans in dangerous jobs like coal mining or ramp work in the airline industry.


Fears about the robot apocalypse are just our primate brains translating a complex problem into a simpler one. Yes, there's a robot apocalypse coming, but it's about replacing your job, not your government.

--snip--

Fact: we are entering the age of self-learning machines. It will take only 10 to 20 years for these machines to become better than us in every respect; after that point, a few more years means hundreds of times better than us, one more year thousands of times better, and so on.

Not many people will lose their jobs, because the change will happen so fast that there will be no time to find tasks for the new tech.

PS: I am not talking about humanoids; a new algorithm does not need a body to improve a design faster and better than we can.
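To put rough numbers on the compounding claim above, here is a quick illustration; the doubling rate is an assumption chosen for the sake of arithmetic, not an established fact:

```python
# Illustration of compounding capability growth: if capability doubled,
# say, every year (an assumed rate), the gap goes from "slightly better"
# to "thousands of times better" within a decade or so.

capability = 1.0   # parity with human level at year 0
for year in range(1, 13):
    capability *= 2.0
    print(f"year {year:2d}: {capability:,.0f}x human level")
# year 10 -> 1,024x; year 12 -> 4,096x
```

Whether any real system would sustain such a rate is exactly the disputed point in this thread; the arithmetic only shows why a compounding process feels sudden.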


Fact: we are entering the age of self-learning machines. It will take only 10 to 20 years for these machines to become better than us in every respect; after that point, a few more years means hundreds of times better than us, one more year thousands of times better, and so on.

--snip--

How fast did production move to China? In 1980 China was a closed country; by 2015 it was the largest manufacturer in the world.

Suppose you have a country whose goal is to support its population by capturing the largest production capacity. It would not need a large population, just an excess of ports. Let's say Ukraine decided to make as many industries as robotic as possible, with relatively low-paid workers maintaining the bots and extensive overseas borrowing; jobs could disappear, say in China, as production moves to the new location. OK, now replace Ukraine with Russia, which has ports on the Pacific, Black Sea, Baltic, and Arctic, plus decent resources and oil. It's not likely they would badly hurt the US or Europe, but such a state could do major damage to developing economies such as China, Malaysia, and India.

