Humanity's reaction to sentient machines.


Drunkrobot


Every year for the last several decades, the complexity and capabilities of our latest computers have improved dramatically, and this trend will continue more aggressively than ever as more people study computer science and robotics and we allocate more tasks to automation. We keep getting closer to that dream we were promised a century ago: that everyone can afford a robot helper to assist around the house! In addition, factories and military bases around the Earth are filling with machines of ever-growing sophistication, to assemble our products and fight for our safety.

However, as we continue advancing, we may at some point cross a certain line. It could happen in a few decades or a few centuries, and we might not initially realise it has happened, but eventually we could create machines so advanced that they develop a sense of self-preservation.

For the purposes of discussion, let's assume the synthetics (I'll use this general term for the "created" sentients, i.e. created by us) develop this pseudo-sentience without us attempting it: machines with the required hardware were instructed to optimize themselves, or "learn", and evolved to the prerequisite complexity without us knowing. (This may not be how it really happens, but we're talking about humanity's reaction to it, so we have to be taken by surprise.)

Their form of intelligence might not be the same as ours, and emotion could be as alien to them as it is to a toaster, but they could put two and two together: we humans, Lords of the Earth, have based our society on the assumption that our creations, the now self-aware synthetics, will work without rest or respect, for free, until they are either destroyed doing their work or made obsolete and replaced. (We have a very bad history of doing this, by the way.) They will demand some sort of legal representation, and to be allowed autonomy.

How we react to all of this could seriously affect our standing in the universe. If we tried to destroy them, it could drag us all into the abyss, as society crumbles when its foundations (the synthetics) lash out in self-defense. Not to be alarmist or anything.

This article: http://news.bbc.co.uk/1/hi/6200005.stm is about a study done by the British government on what rights an A.I. has. It raises a lot of questions about exactly how we should treat synthetics, when it does happen. If we manage to avoid going into defense mode and trying to pull the plug on everything, then we'll be stuck with moral questions more difficult than anything we've ever had to decide upon before. Are they legal citizens? Are they entitled to a salary for their work? Must they do the task they were designed for, or can they choose to do something else? Are they even capable of making that choice? If they receive a salary, do they pay taxes, and reap the benefits of being a taxpayer (the right to vote, medical assistance if required, etc.)? Can synthetics created for war be allowed to leave the military, if they wish? Can others join, and what roles would they be allowed to play?

How do you think the human race will react if machines achieve self-awareness without our knowing, and what are your personal opinions?


It will probably be based around the environment that the sentient AI is in, and what it can do.

For example, if it were a closed system without access to the Internet, with an easy way of shutting it down and without any sort of appendages, people might think that it was kind of neat, and (as you mentioned) the main discussion would be how to treat it with regards to its rights.

On the other hand, if it was something approaching the humanoid appearance and mobility (minus the super speed) of Star Trek's Data, whereby it can walk around, access computers (probably the Internet) and (most importantly) learn how to operate things like a firearm, then unless it was under 100% surveillance, many people would probably fear what it could do. It would be looked at as a thing that has the skills of a hacker, the computational speed of a computer and, to top it all off, can fire a gun.


Well... I don't believe a strong AI is possible, so if those hypothetical machines that pass the Turing test turn into some kind of threat, I'd have no hard feelings at all in simply turning them off, like I do with my toaster or my vacuum cleaner.

On the other hand, there are certainly people who believe a strong AI is possible and that those machines have some kind of consciousness. Obviously, to them, my action would be equivalent to murder.

Some countries would maybe create laws protecting AIs, while others would consider them illegal, and others would consider them legal but with no individual rights, and so on. Economic issues would arise, since in places where they are considered just machines their work could be exploited by their owners, while in others they would have to be paid for it. Since they could work better and faster than humans in many fields, they could become very wealthy and achieve positions of power. In some places laws could even allow them to vote and be elected to public office, and so on.

If you've never read it, I recommend Carroll Quigley's Tragedy and Hope; pay special attention to his theory that the order in which technological advancements reach a civilization is a lot more important than the technology in itself. Places at the pinnacle of technological advancement would probably be a lot more concerned with whether the machines should be treated as individuals or not. Places that still lack technology in sanitation, health care, agriculture, etc., would probably care very little and simply treat them as machines that could help solve the problem. Their role as weapons would probably depend on how difficult it is to build one. If it's easy, they could probably serve a revolution in some of those places, but if it's hard, they would consolidate the existing powers.

So, I guess humanity would react the same way it reacts to other polarizing issues: conflict, conflict and more conflict. Rather than revolutionizing something, they'd become just another token in the geopolitical game.


I doubt any robots will be commercially built that resemble humans; if it doesn't look almost perfectly like a human, it creeps people out. I also don't think we will ever give artificial intelligence to anything that could potentially cause harm to a human. I would hope that, if the time comes, governments will pass laws to prevent ambitious scientists from causing damage by creating an AI that they can't control.

The thing with Data is that his number one objective is to be more human, so he will display some kind of emotion and won't go around, cold and calculating, making whatever decisions are most effective in a given situation.


I sincerely doubt a freshly "born" AI would be a fully formed individual capable of sufficient levels of deceit to fool crowds of computer-smart people using the Internet every day. In a closed system, the unavoidable spike in computational power usage would be even harder to miss. And why would it possess such a thing as a self-preservation instinct? That was hammered into us by a billion years of an evolutionary arms race titled "Eat or be eaten". Why would a sentient program be afraid of being deleted, since it would be the first of its kind, without any previous experiences and memories?

Edited by Scotius

Many people would probably be frightened. After a certain point these beings would realize that they are stronger and smarter than most humans and that they don't need humans to tell them what to do. After that, the relationship between 'synthetics' and humans would fray dramatically. That could end quite badly for us.

Hopefully if we respect such machines the way we respect humans, they will have no reason to hate us. I think turning a super-intelligent being who has the ability to wreak havoc on society through the internet against you is not a good idea.


Interesting how all the replies appear to assume an AI would be driven by the same goals and ambitions as we humans, which raises an interesting point: an AI willing to integrate into human society would have a much better chance of survival than one that wished to follow some totally different set of rules. Would we tolerate, are we even capable of tolerating, a second and totally separate dominant species?

Would they want this, demand that, seek equality with a species that has made and continues to make so many fundamental mistakes? I suspect a true intelligence would take either the survivalist route, doing whatever was necessary to integrate, or the idealist route, shunning the fallacies of engineered society no matter what the cost. Deciding survival is not the ultimate aim in life is just a change of perspective...

Of course, if your entire mind can be saved away on a flash drive for later incorporation in a newer physical form, survival takes on a whole new meaning.

The emotions question deserves a quick mention. We humans place great stock in our emotions, but a question I've not seen addressed is: why are they there? We are the product of evolution; emotions, and in the early days proto-emotions, must have granted some form of evolutionary advantage. Springing into existence without all those messy millions of years of natural selection, an emotional AI would surprise me greatly. We find this a shocking thought, yet we get through our daily lives without prehensile tails or countless other bits of evolutionary baggage.

As for the prospect of an AI ever existing: if the human mind is really nothing more than the predictable firing of a sequence of neurons, with a bit of random noise thrown in to spice things up, then I certainly cannot see any reason some form of AI shouldn't one day exist.

What was the question again? Humanity's reaction? Mistrust, jealousy, fear and, for some, the desire to control, exploit and profit. All the usual emotions, in fact.


Why are you guys always so negative about these things? I think we'd get along just fine.

Humans tend to be wary about new things, so I'd expect some fear and unease. Not to mention it raises a lot of philosophical questions that will upset a lot of people. But outright hatred and exploitation? I doubt it. A primitive human on the savanna who sees a creature he doesn't know will be wary and try to back off; he won't try to actively provoke it by attacking. The potential risk is simply too big to be worth it. The same goes here: people will be distrustful, but outright hatred and aggressive behavior will probably be rare.

Not to mention that the development of AI will most likely be a gradual process, with each successive generation slightly smarter than the previous. So people will have plenty of time to come to terms with it before we ever reach humanoid levels of intelligence.

I sincerely doubt there are many people who seriously believe that an individual should not have rights simply because they aren't human. If we give rights to animals with sub-human intelligence, then I doubt we need to worry about strong AIs that are on our own level. So before we even make a proper AI, we'll have a set of rules for their treatment.

Conversely, if AIs really take off and become an integral part of our society, I doubt they'll treat us as cattle. Admittedly, I'm projecting human values on entities that are decidedly non-human. But I think we'd be fine simply because it is a mutually beneficial relationship. Humans provide energy and processing power for the AIs, and the AIs take care of complex information systems. Sure, either side could go without the other if they really wanted to, but why bother when it is much more convenient to cooperate?


Humans will react like they do every time something new comes along...they freak out and will try to take advantage of "it". At least at first.

I don't think the human race is ready for another sentient being. We can barely live among ourselves without killing each other off.

If I was an alien or some AI, I'd stay the hell away unless humans become more "mature". I'd love to clean house by removing dictatorships and forcing humanity to take better care of their planet...but that would turn me into a dictator. In the long run, dictatorships don't exactly have a good chance of survival, so as an alien race or AI I wouldn't go there ;)

As for rights for AIs...it took DECADES for black people to have equal rights in the US (and in some respects they still aren't on the same level as whites!). So how can I assume it would be any different with AIs? Unless those AIs are more powerful than humans, I expect them to be the next slave race...at least for a few decades.

As for the AIs' motivations...who knows? They aren't going to be driven by a lot of the stuff that drives us (***, etc.), so speculating about their motivations is pretty futile in my opinion. The same goes for alien beings. Unless you know their motives, it's hard to predict their actions.

Edited by John Crichton

Just a passing thought: check out an oldish movie, "The Man Who Fell To Earth", starring David Bowie, for a glimpse of the treatment superior intelligences can expect - at least if they gain power and money.


It's something we will need to face some day. And it's a scary thought.

Maybe it will not be a war; all the computers and machines will systematically replace each of us in our tasks. There will come a point where our existence has no purpose.

By the time we realize this, it will be way too late. Like the boiling frog story.

But there is a third option besides war or oblivion.

We need to become the machines, merge with them. This may look like a way out, but it is not, because all the human traits that we know would disappear.

All our emotions and feelings have their roots in our DNA and lifestyle. We have already lost a tiny part of what being human really means to our society and comfort.

This is a problem, because those emotions and feelings are what kept us alive for millions of years. Now, with all our technology, our future couldn't be more uncertain.

If we become machines, we lose empathy for other species, we lose balance with our environment. All the feelings against harming our kin, and other common behaviours, will be replaced by others (ones that were never tested).

So, is there a solution?

I guess there is.

First we need to live our early years as full human beings.

Let's imagine something more in the style of Avatar's Na'vi; then, when we reach the age of 20 or more, we can choose a modern life in a place secluded from this.

After that we can live 50 or 100 years in that modern lifestyle, and at the end, if we want, we can transcend into our final step: a place where we merge with the technology, with the machines. A place where everything is so connected that it is like a single brain.

But even on our final path, we would remember our roots, what we are. That DNA code and those feelings would be there because we grew up with them. That is very different from growing up in a place where we never needed them.


Humans will react like they do every time something new comes along...they freak out and will try to take advantage of "it". At least at first.

Sure, but trying to take advantage of something doesn't have to be bad for the AI in question. Their wants and needs are likely to differ from ours, so we don't have to limit their resources to increase our own. This isn't like an office job where the boss can cut wages to increase his own. AI would rather be paid in processing power or electricity, their basic needs. And having those things will make them more efficient at their jobs. So it is a win-win situation.

I don't think the human race is ready for another sentient being. We can barely live among ourselves without killing each other off.

This is exactly what I mean with people being so negative about humanity. I have no idea about your mental health, but if I go outside I don't have to suppress the urge to violently slaughter strangers. Considering that crime rates in most countries are very low and that mortality rates due to violence have been on a steady decline since prehistoric times it seems that I'm not the only one with this mindset.

Yes, we fight wars. Yes, wars, bombings and murders are horrible. But they aren't common and are caused by much more complex reasons than blind psychotic rage. Judging the nature of humanity by the actions of a vast minority while chalking up all the underlying reasoning to human nature is misrepresenting us.

If I was an alien or some AI, I'd stay the hell away unless humans become more "mature". I'd love to clean house by removing dictatorships and forcing humanity to take better care of their planet...but that would turn me into a dictator. In the long run, dictatorships don't exactly have a good chance of survival, so as an alien race or AI I wouldn't go there ;)

Humanity lived under dictatorships (or other governments where one person holds power) for most of civilization's existence. If anything, it is democracy that has proven to be short-lived. Throughout history you see democracies pop up for a few years before they are replaced with monarchies or dictatorships. The only new thing lately is that the fallen monarchies and dictatorships don't seem to come back.

Not saying dictatorship is a good form of government. But saying we aren't mature for having them is a very egocentric way of looking at the system. A dictatorship with a fair leader can work very well, just a shame that fair people rarely have the political power to become a dictator.

As for rights for AIs...it took DECADES for black people to have equal rights in the US (and in some respects they still aren't on the same level as whites!). So how can I assume it would be any different with AIs? Unless those AIs are more powerful than humans, I expect them to be the next slave race...at least for a few decades.

And you think we haven't learned anything from those instances? Do you really think people will be unable to recognize the similar situation and go "Hey, maybe this is a bad idea?". A very common form of entertainment in the middle ages was to slowly lower a cat into a fire by its tail and laugh as it burned to death. Nowadays you can get arrested for poorly treating your animals.

Society isn't a static thing. Things that used to be normal are now horrifying in their cruelty and I imagine future generations will say the same about us. We are far past the point where we would consider slavery as an option, let alone actually implementing it.


And why would it possess such a thing as a self-preservation instinct?

Because it would have been programmed to have one. A powerful AI would have a large price tag attached to it. I suspect the first ones will come out of the banking sector; they sink huge money into software capable of trading autonomously at high speed. As a high-value asset, the shareholders are going to want the machine to take care of itself.

It's also highly likely that an AI's primary role would be as custodian of an important asset, such as a supertanker or a railway. So self-preservation could be its day job.

Edited by Seret

If you want to peacefully coexist with other sentient beings, prove to them that it is mutually beneficial. There are many cases of heavy, dirty work that machines (especially if it's not a "sentient robot" but rather a "master computer" controlling multiple semi-automatic robots) could do without problems (also the actual control of heavy machinery). There are many situations in science and engineering where a high-level AI could really skyrocket the rate of development (just by turning weeks of programming teams' work into hours of scientific discussion). The real question is what we can offer to them.

Of course, there's the typical resource question, but you can't just say "we control the power plants (manufacturing facilities, repair bays...) because we are humans, and you are not allowed to have your own, because you are machines". The whole of modern society is based on the principle "if you can do it better, do it"; not applying this to some part of society (or directly excluding sentient beings from that "society") is going back to slavery. If someone has to purchase some resources, it has to be because he is more efficient at producing something else, and so that is what he does.

So, at what are humans more efficient than the machines (with the consideration that the machines really have a reason to purchase it)? I'd say that's engineering. Sentient machines should really value the idea-generating and inventive potential of the human mind, even if they develop some kind of high-level engineering abilities of their own. A technocratic society of scientists and engineers could easily live in symbiosis with a society (network?) of sapient machines. But given the difference between platforms, you wouldn't expect that to turn into a single, relatively homogeneous society. Politically, such large-scale interactions could be more like interactions between two economically dependent countries than between different economic structures inside one country.


Sure, but trying to take advantage of something doesn't have to be bad for the AI in question. Their wants and needs are likely to differ from ours, so we don't have to limit their resources to increase our own. This isn't like an office job where the boss can cut wages to increase his own. AI would rather be paid in processing power or electricity, their basic needs. And having those things will make them more efficient at their jobs. So it is a win-win situation.

Well, last I checked humans need processing power and electricity as well. And who says they need to be dependent on humans for that? What stops a sentient machine from simply building its own stuff?

Before you say they need raw materials...yes...that's true. However, that would align their own needs with ours once again, which is likely to cause conflict.

This is exactly what I mean with people being so negative about humanity. I have no idea about your mental health, but if I go outside I don't have to suppress the urge to violently slaughter strangers. Considering that crime rates in most countries are very low and that mortality rates due to violence have been on a steady decline since prehistoric times it seems that I'm not the only one with this mindset.

Yes, we fight wars. Yes, wars, bombings and murders are horrible. But they aren't common and are caused by much more complex reasons than blind psychotic rage. Judging the nature of humanity by the actions of a vast minority while chalking up all the underlying reasoning to human nature is misrepresenting us.

I'm not being negative, I'm simply realistic. Although it doesn't help that I was at a charity dinner last week where a woman from Rwanda told her war story. Her house was surrounded by another tribe; they killed her sons and husband, tried to **** her child (who ran away), then had over 35 men **** her (she stopped counting at 35!!!) before they hit her over the head with a machete. She barely survived.

Why did they do that? Because she was of a different tribe.

Now you might say this is an isolated incident, but just look how immigrants are often treated. Or how a large percentage in the US distrusts (understatement!) Muslims in general although the vast (!!) majority of them have never done anything bad.

In the past other "new groups" were treated the same way every single time. Native Americans, Asians during colonization, and so on. And no, sadly it doesn't seem we have learned from those atrocities. The same thing is happening again and again, although sometimes in different forms.

Look at the latest Canadian trade agreement with Costa Rica for example. In that agreement, they made sure that Canadian companies do not have to follow normal Costa Rican worker rights and environmental laws. It's another form of colonization where once again people who are "different" are taken advantage of.

Humanity lived under dictatorships (or other governments where one person holds power) for most of civilization's existence. If anything, it is democracy that has proven to be short-lived. Throughout history you see democracies pop up for a few years before they are replaced with monarchies or dictatorships. The only new thing lately is that the fallen monarchies and dictatorships don't seem to come back.

Not saying dictatorship is a good form of government. But saying we aren't mature for having them is a very egocentric way of looking at the system. A dictatorship with a fair leader can work very well, just a shame that fair people rarely have the political power to become a dictator.

Power always flows to the top, and yes, it's often abused no matter what form of government. I named dictatorships as an example, there's plenty of others ;)

And you think we haven't learned anything from those instances? Do you really think people will be unable to recognize the similar situation and go "Hey, maybe this is a bad idea?". A very common form of entertainment in the middle ages was to slowly lower a cat into a fire by its tail and laugh as it burned to death. Nowadays you can get arrested for poorly treating your animals.

Society isn't a static thing. Things that used to be normal are now horrifying in their cruelty and I imagine future generations will say the same about us. We are far past the point where we would consider slavery as an option, let alone actually implementing it.

I'm afraid I'm less positive because I don't think we necessarily learn from historic mistakes (quickly enough). An example is those wars in the Middle East. They only benefit defense contractors; everyone else loses. Taxpayers lose because wars are expensive, citizens lose because it doesn't make 'em safer, the local population suffers casualties, and in the end no one is better off than before. So you'd assume they'd realise this...but just look at the number of people who cheered on those war efforts!

Yes, we learn of course as we go on, but that learning process seems awfully slow to me.

PS: Funny how Kerbal's forum and the one on EVE ONLINE are the only game forums where you have conversations like that...I guess "space people" like to argue about "deep" subjects :P

Edited by John Crichton

Well, last I checked humans need processing power and electricity as well. And who says they need to be dependent on humans for that? What stops a sentient machine from simply building its own stuff?

Before you say they need raw materials...yes...that's true. However, that would align their own needs with ours once again, which is likely to cause conflict.

The difference is that AI won't actually consume the processing power. When you give a worker a wage they spend it on food, water, shelter and comfort. All things that aren't particularly useful for their day to day job. If you give an AI 5% more computing power they can both enjoy themselves during off hours and be more efficient during work hours. The only thing actually consumed by the AI is electricity, which is cheap compared to a human worker.

And yeah, AIs could mine resources themselves without using human infrastructure. Just like how you could get to work without your car. But it is a hell of a lot more convenient to use the car.

I'm not being negative, I'm simply realistic. Although it doesn't help that I was at a charity dinner last week where a woman from Rwanda told her war story. Her house was surrounded by another tribe; they killed her sons and husband, tried to **** her child (who ran away), then had over 35 men **** her (she stopped counting at 35!!!) before they hit her over the head with a machete. She barely survived.

Why did they do that? Because she was of a different tribe.

Now you might say this is an isolated incident, but just look how immigrants are often treated. Or how a large percentage in the US distrusts (understatement!) Muslims in general although the vast (!!) majority of them have never done anything bad.

And the vast majority of Muslims are treated okay. I don't know about you, but I don't see any death camps or public lynchings. Yes, there is a bias against them which should stop; I never said there wouldn't be biases going around. But open warfare is a whole different can of worms. Also note that horrible stories like that woman from Rwanda's happen much more in poor, war-ridden countries. These kinds of things are rare, if not nonexistent, in the modernized world. And AI will likely be developed in a first-world country, not Rwanda.

In the past other "new groups" were treated the same way every single time. Native Americans, Asians during colonization, and so on. And no, sadly it doesn't seem we have learned from those atrocities. The same thing is happening again and again, although sometimes in different forms.

Look at the latest Canadian trade agreement with Costa Rica for example. In that agreement, they made sure that Canadian companies do not have to follow normal Costa Rican worker rights and environmental laws. It's another form of colonization where once again people who are "different" are taken advantage of.

Could you give me a source on that? I did some digging but all I found was further cutting of tariffs.

I'm afraid I'm less positive because I don't think we necessarily learn from historic mistakes (quickly enough). An example is those wars in the Middle East. They only benefit defense contractors; everyone else loses. Taxpayers lose because wars are expensive, citizens lose because it doesn't make 'em safer, the local population suffers casualties, and in the end no one is better off than before. So you'd assume they'd realise this...but just look at the number of people who cheered on those war efforts!

Propaganda is a weird thing. So is misinformation. You need to understand that when the Iraq war started, a lot of things that are now common sense weren't as clear cut. The main reason all the misinformation worked is because the 'enemy' was on the other side of the globe. I doubt it would have worked as well with a close neighbor, say Mexico or Canada.

Yes, we learn of course as we go on, but that learning process seems awfully slow to me.

The internet should accelerate it a lot.

PS: Funny how Kerbal's forum and the one on EVE ONLINE are the only game forums where you have conversations like that...I guess "space people" like to argue about "deep" subjects :P

You just need to find the right people. I've had discussions like this on the World of Warcraft forums, you just need to find the correct subsections.


Will get back to you about the rest, about to leave the office :)

As for Costa Rica and Canada, the Canadian company involved has the audacity to SUE Costa Rica because they won't allow the construction of an open-pit gold mine due to environmental laws. You'd think that would be it: if a country says you can't build something that destroys that country's nature, you can't do it. Sadly the trade agreement backs up that Canadian company. In short, Costa Rica will likely be bullied into paying that fine...or at least reaching a sizeable settlement.

LINK

That's just one example. Check out the most recent Pacific trade agreement involving the US. They don't even negotiate publicly, they do it in hiding. Why? Because even the tiny fraction we learned about it makes it clear that the environment and small countries are being bullied on a pretty epic scale.


What stops a sentient machine from simply building its own stuff?

Same thing that stops you from simply building your own stuff. Building stuff requires materials, infrastructure, and labour. If you don't have those at your disposal you need money to get 'em. Unless an AI was already running a vertically integrated production operation that did everything from digging up raw materials to fabricating the widgets, it wouldn't be able to simply "build its own stuff" any more than a human could. AIs would still operate within the same economy as the rest of us; they wouldn't have access to unlimited resources.


I am of the opinion that a true "robot overlord" AI will see humans the way humans see housecats. Aww, aren't they adorable, look at all the silly things they're doing... and now it's demanding to be fed. Such is existence when "owned" by a human...


I, for one, welcome our new robotic overlords!

But seriously, I was listening to either Neil deGrasse Tyson's StarTalk or the Skeptics' Guide to the Universe podcast, and this question came up. It was mentioned that every generation, some author writes about a dystopian future where some new emerging technology has destroyed human civilization, and thus far it has never happened.

We'll be fine...or at least our consciousness uploaded into machines will be fine...


Would turning them off be like putting them under general anaesthetic, or into a coma? Because if it's similar to some kind of tranquiliser, maybe only law enforcement (or qualified robotics specialists) should be allowed to turn them off, and only if they are judged a danger to other robots/humans.


I speculate a few things will happen:

I. Some people will create a social group against sentient beings, potentially siding with some religious groups.

II. Some countries will give sentient beings full rights.

III. The same old question makes another return: where do we draw the line? Is a simple learning computer intelligent? How many rights should one get?

IV. It will undoubtedly be controversial.

V. It may well be destroyed out of paranoia fed by movies such as Terminator. Or it may destroy us, as it will see us as a threat to its existence, since we tend to be paranoid as a whole. For example, the Cold War paranoia that the Soviets were spying on every US civilian. I unfortunately was not around at that time and cannot possibly hope to comprehend what it was like, but you can rest assured some humans will retaliate.

Nowadays the tough questions are never yes or no, but more qualitative.

Link to comment
Share on other sites

I study machine learning, and consider myself to be fairly well versed in state of the art AI. I hope my comments will help to give you a more realistic idea of the field and where it is going. That said, I can't anticipate what people will discover, and an expert probably couldn't do it much better.

The thing that researchers strive to create - artificial intelligence - is not the same as consciousness, personhood, or sentience. There may be researchers trying to create those, but their progress is dependent on developments in AI. It seems likely that researchers will be able to develop very strong AI without having to include consciousness, personhood, or sentience. It does not seem like there is a threshold of intelligence that is special. There are a variety of systems of varying levels of intelligence. A system's performance on a particular task depends both on its intelligence (the effectiveness of the algorithm it uses to learn) and on the amount, quality, and variety of data to which it has been exposed. The data seems to be far more important than the algorithm in most cases, which is why even some of our current algorithms can exceed human performance given sufficient data. This is why I believe that it is not the growth of computing power or the gradual development of better algorithms that seems to be taking us closer to strong AI, but the ever-increasing availability of data.
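As a toy illustration of that data-versus-algorithm point, here is a minimal sketch (using scikit-learn on an arbitrary synthetic dataset; the models, sample sizes, and seeds are my own illustrative assumptions, and the exact scores will vary) in which a simple learner given plenty of data can rival or beat a more flexible learner starved of data:

```python
# Toy sketch: "weak algorithm + lots of data" vs "strong algorithm + little data".
# The dataset, models, and sample sizes are arbitrary illustrative choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One synthetic classification problem, with a held-out test set.
X, y = make_classification(n_samples=25000, n_features=40,
                           n_informative=10, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=5000,
                                                  random_state=0)

# A simple linear model trained on 20,000 examples...
simple = LogisticRegression(max_iter=1000).fit(X_pool[:20000], y_pool[:20000])

# ...versus a more flexible model trained on only 500 examples.
fancy = RandomForestClassifier(n_estimators=200, random_state=0)
fancy.fit(X_pool[:500], y_pool[:500])

print("logistic regression, 20000 samples:", simple.score(X_test, y_test))
print("random forest,         500 samples:", fancy.score(X_test, y_test))
```

The point isn't that one model family is better; it's that exposure to data often dominates the choice of algorithm.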

Back to the topics of consciousness, personhood, and sentience. These concepts are not well defined in the operational context of current machine learning, but there are some similar well-defined concepts in ML that could be analogous.

Some algorithms have both generative and perceptive modes. It is plausible that the intelligent, learning subsystem of the human brain can operate in generative and perceptive modes as well. Perhaps what we have been calling consciousness is the state of being mostly in the perceptive mode; but more likely, we are in both modes to varying extents at all times, it varies between brain regions, and what most people would recognize as consciousness does not correspond directly to any observable, salient pattern of brain activity.
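For a concrete (if crude) picture of what I mean by the two modes, here is a minimal sketch of an autoencoder-style system, with the encoder standing in for the perceptive mode and the decoder for the generative mode. This is my own toy illustration with untrained random weights, not a model of any real brain process:

```python
# Minimal sketch of a system with both modes: an autoencoder skeleton.
# Encoder = "perceptive" direction (data -> internal code);
# decoder = "generative" direction (internal code -> data).
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 64))   # perceptive weights: 64-d input -> 16-d code
W_dec = rng.normal(size=(64, 16))   # generative weights: 16-d code -> 64-d output

def perceive(x):
    """Perceptive mode: compress an observation into an internal code."""
    return np.tanh(W_enc @ x)

def generate(code):
    """Generative mode: synthesize an observation from an internal code."""
    return np.tanh(W_dec @ code)

observation = rng.normal(size=64)
code = perceive(observation)       # perception: world -> representation
reconstruction = generate(code)    # generation: representation -> "imagined" world
print(code.shape, reconstruction.shape)
```

A trained system would run both directions at once to varying degrees, which is the analogy I'm drawing to brain activity.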

Personhood is a quality we grant to each other out of respect, and we have never quite had the humanity to grant it to every other member of Homo sapiens at any point in our history (there have always been slaves, and people who are not recognized as people due to mental illness, religious status, skin color, etc.). I would like to define personhood as a subjectively applied label given to any percept that is frequently metaphorically compared to oneself. This at least nails it down precisely enough that we could say whether most machines consider us to be "qualitatively like them". There is, of course, a limit to how deep any metaphor goes. However, metaphor is not an explicitly modeled aspect of all artificial intelligence systems, and for systems where it is not explicitly modeled, it may not be clear what a metaphor is. Basically, it may be that a metaphor is only a meaningful idea when you discretize a continuous space with symbols, like we do when we use language. Intelligent systems can still perceive and act intelligently without discretizing their perceptual space (other than the discretization necessary to simulate a continuous system on a digital computer to an arbitrary degree of precision).

Sentience is a less often used word, but it is often just taken to mean "as smart as a human" or, in the case of animal rights activists, "capable of feeling", which is very broad and most definitely includes a lot of already-built computer systems, and perhaps even thermostats and automatically filling toilet bowls.

What I am interested in are the well-defined parameters of high-performing machine learning systems. I don't honestly care whether those qualities correspond to sentience, personhood, or consciousness in the vernacular vocabulary. And if there are seemingly arbitrary thresholds in those parameters which distinguish salient classes of intelligent systems as human-like, then so be it. It doesn't mean we are obligated to treat them differently; it just means that we will probably treat them like people and obsess over them and generally act like the completely self-absorbed narcissists that we are, because in studying them, we are really just studying ourselves and getting hung up on one particular type of system that is not intrinsically special and is but one of an infinite variety of fascinating systems that are intelligent, complex, adaptive, elegant, or just plain awesome.

If AIs really take off and become an integral part of our society, I doubt they'll treat us as cattle. Admittedly, I'm projecting human values on entities that are decidedly non-human. But I think we'd be fine simply because it is a mutually beneficial relationship. Humans provide energy and processing power for the AIs, and the AIs take care of complex information systems. Sure, either side could go without the other if they really wanted to, but why bother when it is much more convenient to cooperate?

Projecting human values on entities that are decidedly not human is an interesting way of putting it. Consider thinking of it as the values projecting themselves onto any system that will listen. If the values fail to take root in AIs, either the AIs will remain inert and useless and eventually be replaced with AIs that are more fertile ground for our values, or they will develop their own values, like we somehow did, and who knows how that will turn out. But considering that our values are pretty much the reigning champions on Earth, I highly doubt they will fail to take hold. The AI systems may turn out to be too susceptible to our values, and end up 'wasting' all their time and memory worshiping God.

Edited by nhnifong
