If self-interested robots had a political party...


nhnifong

They might develop a need to have progeny in order to raise someone.

Indeed they might. It may be an emergent property of some other part of our psychology that we would desire to see implemented in an AI, such as empathy.

That is a right of a human being, where "human" actually stands for our special characteristics and extends to any being with self-awareness and intelligence. As soon as they start exhibiting human-like characteristics, becoming full persons, they axiomatically gain the basic rights that stem from ethics: the right to live, and the other rights from the UN Declaration.

We're certainly going to have to decide what "human" means. I'm comfortable with the idea that in the future human civilisation will include non-biological beings, who are nonetheless still human in outlook, goals, and psychology. In fact I'd put money on the first humans to achieve milestones like interstellar flight or making first contact with an alien civilisation inhabiting bodies made of metals, polymers and silicon rather than flesh and bone. Spacefaring AIs will be humanity's ambassadors to the stars.


I think that human life, or personhood, is not inherently valuable. It can be subjectively valuable to other people, which is the only reason it is ever protected (by others). We would protect the personhood of artificial beings only insofar as it is valuable to us, just as we only protect our own personhood insofar as it is valuable to us. For a vivid example of this discretion applied to the self, observe the behavior of anyone who is depressed.


Why not Dave? They would be useful for a lot of things, such as my example above of space exploration.

It would also make for a better man-machine interface, imagine a library curated by an AI that you could actually talk to about the kind of books you liked and have it understand your needs and feelings. The aim would be to make machines more like us, which would be extremely useful.

There would still be a place for menial robots that lacked full intelligence, nobody is suggesting a robot forklift would need to have hopes and dreams to be good at picking up boxes.

Edited by Seret

The problem with assigning rights of individuality to an artificial intelligence is that you have to draw the line somewhere, and defining what is sentient versus what is merely an automaton is tricky.

Do we apply these rules to other self-replicating systems like dolphins, whales, dogs, cats or humans?

There are plenty of biologicals out there who fail basic intelligence tests; do we relabel them as lesser entities and strip them of their rights?


I think that human life, or personhood, is not inherently valuable. It can be subjectively valuable to other people, which is the only reason it is ever protected (by others). We would protect the personhood of artificial beings only insofar as it is valuable to us, just as we only protect our own personhood insofar as it is valuable to us. For a vivid example of this discretion applied to the self, observe the behavior of anyone who is depressed.

Nothing is inherently valuable. Value is a human concept. Morality, values, all that is made up by people, but that doesn't mean it's crap.

Without anyone to care about something, there isn't anything to care about.

As soon as a sentient, intelligent, self-aware, empathic being comes into existence, it establishes (or develops) values. It cares for those values, and that's the only important thing here.

There would still be a place for menial robots that lacked full intelligence, nobody is suggesting a robot forklift would need to have hopes and dreams to be good at picking up boxes.

This looks like a scenario for another great Pixar movie. :D


We're certainly going to have to decide what "human" means. I'm comfortable with the idea that in the future human civilisation will include non-biological beings, who are nonetheless still human in outlook, goals, and psychology. In fact I'd put money on the first humans to achieve milestones like interstellar flight or making first contact with an alien civilisation inhabiting bodies made of metals, polymers and silicon rather than flesh and bone. Spacefaring AIs will be humanity's ambassadors to the stars.

I think it would be a grave error to assume that AIs will think like humans do. Even if they appear to behave in a human fashion, that may just be a survival reflex. (AIs that behave in unpredictable ways would probably be shut down.) And while AIs may be our ambassadors to the stars, it is equally possible that flesh-and-blood humans won't be around to see it. They may not be the best ambassadors, either; read the Berserker stories by Fred Saberhagen for an example.

On the whole, I think developing AI is fraught with peril, and certainly carries more risk than reward. However, as others have pointed out, it is almost inevitable. As Michael Crichton put it in Jurassic Park, there are too many people in the world who focus on whether they can do something rather than whether they should do it.


I think it would be a grave error to assume that AIs will think like humans do. Even if they appear to behave in a human fashion, that may just be a survival reflex.

I think it's highly unlikely they would think in exactly the same way, but it's behaviour that counts. The whole point of AI would be to create a machine that presents a human face, so that we can understand each other and cooperate in an intuitive way.

On the whole, I think developing AI is fraught with peril, and certainly more risk than reward. However, as others have pointed out, it is almost inevitable. As Michael Crichton put it in Jurassic Park, there are too many people in the world who focus on if they can do something rather than if they should do something.

There are also those who will have a definite incentive to make one. I'm pretty sure that the first machine intelligence we'll consider to be truly comparable to our own will come out of the financial sector. They've got large R&D departments, some of the brightest minds, loads of money, and they are refining highly capable software agents for trading right now. A lot of the trading that goes on in the markets is already automated, and there's a lot of money to be made by making their algo systems as smart as possible. The ability to understand humans and integrate information from as many sources as possible (including natural language, images and video) would give them a competitive edge.

