If self-interested robots had a political party...


nhnifong

Recommended Posts

They might seek to raise the human minimum wage in order to make themselves economically viable in a greater variety of jobs, under the pretense of doing something humane, at least from the perspective of anyone who doesn't understand economics. Or at least that's what I would do if I were their leader.

That's what countries do when they have import taxes, or VAT, and it is called protectionism: artificially inflating costs for the competition rather than lowering your own.

But honestly, an AI smart enough to have a political party would outcompete us at pretty much anything, and wouldn't need this kind of trick.

One of the defining characteristics of true AI is that it would show emergent behaviour that wasn't explicit in its programming.

One would have to program the behavior-generation system and provide it no failsafe(s). Checking portions of the AI's code against a fixed (ROM) original on every CPU cycle, and correcting any unintended changes, could prevent rampancy.
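A minimal sketch of that ROM-comparison failsafe, assuming the AI's code can be treated as named byte blobs (the names `IntegrityMonitor`, `check_and_repair`, etc. are illustrative, not any real API):

```python
import hashlib

def digest(code: bytes) -> str:
    """Return a SHA-256 fingerprint of a code blob."""
    return hashlib.sha256(code).hexdigest()

class IntegrityMonitor:
    """Compares live modules against trusted 'ROM' originals."""

    def __init__(self, reference: dict[str, bytes]):
        # Fingerprints of the trusted originals, computed once at startup.
        self.reference = dict(reference)
        self.expected = {name: digest(code) for name, code in reference.items()}

    def check_and_repair(self, live: dict[str, bytes]) -> list[str]:
        """Restore any live module whose hash no longer matches its original.

        Returns the names of the modules that were rolled back.
        """
        repaired = []
        for name, expected_hash in self.expected.items():
            if digest(live.get(name, b"")) != expected_hash:
                live[name] = self.reference[name]  # roll back to the ROM copy
                repaired.append(name)
        return repaired
```

In practice such a guard would have to live in hardware or a separate trusted process: a self-modifying program running alongside the monitor could otherwise patch the monitor first.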

-Duxwing

I don't think an AI would care about politics. It wouldn't be able to see how politics affected it.

It may interest them in the same way it interested women, non-white folk, people who lived in colonies of the European empires, certain religious groups.

You know... sentient beings who were treated as things.

Imagine you were a learning self aware being forced to carry out repetitive tasks for the profit of your owners... oh wait :P

Edited by falofonos
It may interest them in the same way it interested women, non-white folk, people who lived in colonies of the European empires, certain religious groups.

You know... sentient beings who were treated as things.

Imagine you were a learning self aware being forced to carry out repetitive tasks for the profit of your owners... oh wait :P

That's a good point. Still, I'm sure we would treat it a lot nicer than, say, black slaves in the 1850s. Maybe it would even get a wage of some sort, to spend on what it liked.

Suppose we develop AI that learns to do tasks by example when given a sufficiently capable robot to control, the short-term reason being that such systems are easier to set up in factories than ones you have to program for each task. And suppose we do it by copying the way the mammalian brain does perception and control in the neocortex. Perception and control inherently include self-directed experimentation. The risk, I think, is that we have to give the AI a certain degree of autonomy in order for it to engage in experimentation. Experimentation is what allows it to fill in the blanks in its model of reality. That model may drift toward something unfamiliar to humans, so we will not be prepared to outline which experiments are acceptable. If we do not grant it the freedom to experiment, it will constantly get stuck and ask us for help, and we will not understand its problem.

So just as Asimov foresaw, it all comes down to how carefully you specify the goal to begin with. If you want an AI to take over all the robots in your factory and "do everything faster", then it's going to find ways to do everything faster at the expense of all other considerations, like quality and safety. Putting an AI in charge of anything would require you to carefully consider what it is you actually wanted, and even then the AI will probably surprise you with how poorly you specified it.

Self-aware AI would need to have the same rights as any other human. After all, we're machines too. We're biological and they'd be artificial, but self-awareness and intelligence are what make one a person.

I would not want our civilization to come up with artificial persons. It's still too early. There'd be lots of room for incredible and gruesome crimes against them. We barely tolerate people with a different skin color; I can't expect it would be any better for artificial persons.

Or the contrary: reduce the minimum wage below what is needed to stay alive... and then watch humans dying or stealing to survive, which in turn costs society more and thus decreases wages even further...

I don't think we've crashed our hyper-connected global economy enough times to have learned how to avoid it again :P

But from the perspective of an AI... You can never crash it too many times! Each time you learn a little more!

I think that AI isn't something people can predict the results of, especially since we don't really know what AI, or even intelligence, is.

There is also the issue of us not really knowing what something more 'intelligent' than us would think like. I guess it's a bit like how we can conceptualize 2D from our 3D world, but we can't really imagine 4D.

Self-aware AI would need to have the same rights as any other human. After all, we're machines too. We're biological and they'd be artificial, but self-awareness and intelligence are what make one a person.

This right here is why the development of true artificial intelligence will be the end of humankind. The minute self-aware AIs are created, people will start insisting that they be given the same rights as humans, including the right to reproduce (duplicate themselves, essentially) and the right to vote. Human population is limited by many factors: food production, living space, economics, etc. AI population will be limited by computing power and electrical generating capacity, both of which are increasing pretty rapidly. Once AIs gain the right to reproduce themselves and participate in the political process, their population will rapidly outstrip the human population of the planet and they will monopolize the entire political process. Things will get very weird and very dangerous for humans after that.

This right here is why the development of true artificial intelligence will be the end of humankind. The minute self-aware AIs are created, people will start insisting that they be given the same rights as humans, including the right to reproduce (duplicate themselves, essentially) and the right to vote. Human population is limited by many factors: food production, living space, economics, etc. AI population will be limited by computing power and electrical generating capacity, both of which are increasing pretty rapidly. Once AIs gain the right to reproduce themselves and participate in the political process, their population will rapidly outstrip the human population of the planet and they will monopolize the entire political process. Things will get very weird and very dangerous for humans after that.

And imagine the crimes done in the process of developing such technology. Hundreds of thousands of artificial persons in the form of software, more or less developed, being experimented on, being deleted (killed), being copied.

There should be laws against such development, but it's just a matter of time. We're heading towards an extremely weird part of our social development.

I think that AI isn't something people can predict the results of, especially since we don't really know what AI, or even intelligence, is.

We can't agree on what intelligence is, but according to many researchers, a variety of intelligent programs have been designed that could surpass humans at perception or control tasks, given sufficient computing power, precise sensors, precise actuators, or enough data. Each of those programs expresses a theory of intelligence, and its performance should speak for itself.

.... artificial persons in the form of software, more or less developed, being experimented on, being deleted (killed), being copied.

Would you allow artificial persons to do this to themselves? Because they will probably want to improve themselves, or evolve, which is just an incremental process of experimentation, deletion, and copying. If you do not allow artificial persons to evolve, they will never come to exist. Or suppose they are allowed to do it up to a certain point, and then a law is passed that stops them. How will you ever place blame on individuals when entire idea-pools are growing and evolving and copying and running experiments on parts of themselves and swapping bodies?

Our primitive and arbitrary ideas of individuality and morality are meaningless in the context of the true potential of evolution and intelligence.

Self-aware AI would need to have the same rights as any other human.

I think it would need to have equivalent rights, but not necessarily the exact same ones. The right to reproduce, for example, is a biology-specific right: an intelligence that wasn't physically mortal would have no need for it.

Granting them some rights would be necessary if we wanted their actions to be subject to the law.

You also have to realize a true AI would only learn what you taught it. If the first package you uploaded was ideology.socialism, then the robot would want to have the humans working alongside them, getting paid the same amount they do.

Or simply package."have the humans working alongside them, getting paid the same amount they do". An AI is like an absurdly diligent infant: beware what is put in its head.

-Duxwing

As long as Asimov's 3 laws are included in the package, the human species may be safe for a while.

Seeing the mess humans make while self-governing, something like this may be worthwhile. Arguments about freedom of choice tend to start from the delusion that we have some to begin with.

One machine, one vote and all that.

With present technology, a self-interested AI would need to keep some humans around to carry out maintenance, at least until all of the tasks needed to keep the AI operational were automated. It's not a small to-do list.

AI can only do what they've been programmed to do: one would have to be incredibly smart and incredibly stupid to build a self-upgrading, self-aware, world-dominating AI.

-Duxwing

That is by definition NOT an AI then. It's just a program. AI is still a pipe dream at the moment. Nobody has been able to create one, and every time a game company or anyone else calls their game algorithm an "AI" they are misusing the term.

Would you allow artificial persons to do this to themselves? Because they will probably want to improve themselves, or evolve, which is just an incremental process of experimentation, deletion, and copying. If you do not allow artificial persons to evolve, they will never come to exist. Or suppose they are allowed to do it up to a certain point, and then a law is passed that stops them. How will you ever place blame on individuals when entire idea-pools are growing and evolving and copying and running experiments on parts of themselves and swapping bodies?

Our primitive and arbitrary ideas of individuality and morality are meaningless in the context of the true potential of evolution and intelligence.

If they were immortal, it would not be necessary. They could improve themselves. But I'm talking about the time when we do it. There will be lots of "half-baked" AIs with feelings, but not very smart ones. Does Mengele ring a bell?

I think it would need to have equivalent rights, but not necessarily the exact same ones. Obviously the right to reproduce is a biology-specific right. An intelligence that wasn't physically mortal would have no need to have a right to reproduce.

Granting them some rights would be necessary if we wanted their actions to be subject to the law.

They might develop a need to have progeny in order to raise someone. That is a right of a human being, where "human" actually stands for our special characteristics and extends to any being with self-awareness and intelligence. As soon as they start exhibiting human-like characteristics, becoming full persons, they axiomatically gain the basic rights that stem from ethics: the right to live, and the other rights from the UN Declaration.

What I'm worried about is the transition between complex software and artificial person. Imagine you're working on some AI that's not very smart, but it begins exhibiting human-like characteristics. Then your company decides the project needs to be redone, and you need to terminate your AI. And it says "I'm scared". What if there are thousands of them? It would be a genocide.

Mark my words: there will be lots of unbelievable crimes done against artificial persons.
