
Humanity's reaction to sentient machines.


Drunkrobot


What happens when the day comes that an AI is given a position of power? Let's say, just playing complete and total make-believe here, that the Swiss Confederation decides to appoint an advanced, sentient AI like Data from Star Trek to its Judicial Court.

If that's okay, then what about one becoming, say, CEO of Microsoft? Secretary of Defense? President?

Is there a line that *should* be drawn, or is it fair game for all individuals - human or AI?


What happens when the day comes that an AI is given a position of power? Let's say, just playing complete and total make-believe here, that the Swiss Confederation decides to appoint an advanced, sentient AI like Data from Star Trek to its Judicial Court.

If that's okay, then what about one becoming, say, CEO of Microsoft? Secretary of Defense? President?

Is there a line that *should* be drawn, or is it fair game for all individuals - human or AI?

Well, then that AI gains the right to own property, and all the other rights of a Corporate Entity. :P


Projecting human values on entities that are decidedly not human is an interesting way of putting it. Consider thinking of it as the values projecting themselves onto any system that will listen. If the values fail to take root in AIs, either the AIs will remain inert and useless and eventually be replaced with AIs that are more fertile ground for our values, or they will develop their own values, like we somehow did, and who knows how that will turn out. But considering that our values are pretty much the reigning champions on Earth, I highly doubt they will fail to take hold. The AI systems may turn out to be too susceptible to our values, and end up 'wasting' all their time and memory worshiping God.

Part of the reason we want to create more powerful AIs is so that we can create machines that we understand, that understand us, and with which we can communicate naturalistically. For these kinds of AI we're bound to form them in our own image, which means the ones that we find useful will be those that mirror human psychology, values and outlook. A completely inscrutable machine may have utility for any task where it's not required to communicate with humans, but I suspect in the real world those would be a pretty limited subset of machine intelligences.

Making our machines more "human" is one of the goals. That's why I'm not concerned about a Terminator-style schism between machines and humans. They're likely to be quite similar to us, and share our views on many things, because we will have wanted them to be that way.


I think a programmed entity could hardly be called sentient and self-aware. It could emulate this, though. Would that be enough for us to accept it as "one of us"? That depends on our view.

What could be called a true synthetic person is a system which incorporates chaotic phenomena in its basic units, much as our neural networks do. Even if those units are synthetic, artificial, mechanical, such an entity, if it gains the ability to learn, would be a true person. The qualities it would/could possess are another matter.

After all, we're machines, too, but biochemical, based on carbon compounds and ionic currents.

As for the social changes, I'm quite pessimistic about it. Humans are mostly disgusting creatures that simply don't accept diversity. That's why when a minority wants to fit in, it highlights its similarities. "They're humans, too." (non-humans are bad?), "We like music, too." (people who don't like music are crap?)

When the majority establishes a bond based on similarity, partial acceptance occurs. You can't change this behaviour overnight.

I root for the synthesis of humans and technology. When the borders become fuzzy, the problem won't seem that big anymore.


What if humans with technology upgrade their consciousnesses, eventually becoming synthetics?
They will be beaten up and their lunch money taken for being nerds. It's already happened.

I find legal guidance in the treatment of non-human animal persons. There is a short list of species that have independently developed language and/or culture and should be given rights. Which rights? All they can possibly avail themselves of. Voting would be difficult as it would be hard to explain to a chimp and as of yet impossible to explain to animals whose languages have not yet been translated. The basic right to life and freedom in the wild of both the intelligent species and individual members of said species would be a good start and has already begun to be implemented with Cetaceans and Great Apes.

The Klingons in Star Trek VI complained that "human rights" were racist in concept. The non-human persons legal concept is an attempt to extend the concept of rights from "subjective to one species" to "any species that can avail itself of these rights." Any form of artificial intelligence would find itself eligible alongside its biological counterparts. Whether the AI has the intellect of a child, bird, or dog, or that of a god, or is merged with a human, or is an alien machine built by hands other than our own should not matter.


They will be beaten up and their lunch money taken for being nerds. It's already happened.

People are funny about what they'll accept. Attaching tech to your face crosses a line IMO, but nobody has a problem with a prosthetic limb. My dad has a pacemaker, so he's a cyborg. Without it he'd probably be a corpse by now. My daughter is visually impaired, I fully expect her to be using assistive technology along the lines of Orcam in future decades. If it develops to the point of using a brain-computer interface then she'd undeniably be a cyborg.

Assistive and medical technology like this is how people will become desensitised to cybernetic devices. Whether or not things like Glass get taken seriously, we've rapidly become cool with the idea of personalised computer devices we carry or wear at all times (i.e. smartphones). I think it's pretty inevitable that the lines will get blurrier in the future.


The main problem with the concept of AI itself is that it is evolving toward the same destination (human-level consciousness) from a completely different direction.

Biology is based on things that happen to work. And it evolves just because whatever works slightly better has greater chances to pass its genes on, while whatever doesn't work well, or simply fails to reproduce, is lost. Even the simplest organisms (over billions of years of evolution) have developed some complex mechanisms for interacting with the environment, and that's what we determine as "alive". Then on this basis more and more complex organisms appeared, until we get to creatures with developed nervous systems and learning capabilities - that's the point where "sentience" starts. Then we have Homo sapiens, with a nervous system sufficiently advanced for cultural evolution. But such things as science, formal logic, engineering, etc. became advanced only in the last several centuries (it could have been much earlier if humanity hadn't managed to destroy what it had 2000 years ago).

Technology starts with engineering and formal logic, then gets environment-interaction mechanisms specially crafted for specific purposes, then starts to develop (again in an "engineered" way) some learning capabilities. We aren't even sure if a sentient AI will be "alive" (on the other hand, what do we define as "alive", and do we even need to consider this question in the case of sentient machines?). It's just too difficult for average humans to understand something that first thinks and only then feels.

Or another question: human consciousness is kinda emotion-driven (which in turn is based on physiological reactions), our current machines are fully logic-driven. If these ever meet (or at least approach each other), where will that point be?

One of the options (you can consider that my "vision") could be that the engineers will manage to put the engineering/analytical way of thinking into the AI. At this point you don't need to worry too much about the emotional side: the emotions of modern adult humans are mostly intertwined with logical evaluation of things - this side will already be part of this analytical mind, and the chaotic side of emotions might even be left out completely. Engineers will just create an artificial engineer with human-level understanding and supercomputer-level processing capabilities (is this the next step in scientist/engineer evolution?). Understanding between this AI and the engineers would be perfect; as for understanding simpler people... engineers and scientists often have slight problems understanding purely emotion-driven actions.

And to think about the "worst case" scenario of humanity being replaced by its creations: if they are built by humans and raised in contact with human culture, won't they culturally be our successors? And before you say "they are completely different", compare modern society with what it was just a couple of centuries ago. The cultural change of this transition could be of a similar scale. It would just be replacing biology with more efficient technology.

Not sure about the definition of "human", but a scientist remains a scientist even in GLaDOS hardware (although I'm not sure that thing even had any idea of what research such tests were needed for).


but nobody has a problem with a prosthetic limb
A lot of people have a problem with prosthetic limbs in particular and disability in general. Prosthetic limbs are the butt of jokes in movies. Let me rephrase: apes are squeamish about disabilities because they have an instinct to maul anyone who is different, smaller, competition, a potential mate, or running away. So of course entire groups like the disabled, the elderly, and the weak get regularly preyed upon - exactly the people who turn to cybernetic devices in time of need. Add classism on top, for those apes who need health care and can't get it while your relations can, as can clearly be seen by their shiny bits.

So of course voluntary adoption of cybernetic devices requires an element of tolerance and understanding completely foreign to most apes. Google Glass has been received with paranoia and suspicion, and rightly so. Smartphones are derided for turning people into zombies. How do we expect the dumbest members of society to tell the difference, at a glance, between expensive and invasive toys for geeks and medically necessary devices? These are primates who become enraged to hear that you are enjoying the wrong video games, or enjoying the right video games the wrong way.

I have a dim view of human nature as simian, but with the ability to delude ourselves and rationalize our monkey antics to others. I've watched too many nature documentaries and read too many ape studies and human studies to see it otherwise. We need to do a lot more as a society toward integrating the disabled and the elderly into mainstream society instead of making it an exclusive club for the beautiful people.

A lot of people have a problem with prosthetic limbs in particular and disability in general.

Yes, and the rest of us have a word for people like that. I tend not to let their opinions have any effect on me, they're not worth it.


Meh, they're just a gobby minority. I don't share your opinion that the majority of people out there are knuckle-dragging simians. If they were, civilisation wouldn't be functioning on the level that it does.


And to think about the "worst case" scenario of humanity being replaced by its creations: if they are built by humans and raised in contact with human culture, won't they culturally be our successors? And before you say "they are completely different", compare modern society with what it was just a couple of centuries ago. The cultural change of this transition could be of a similar scale. It would just be replacing biology with more efficient technology.

I agree with most of your points.

But I don't think we are in a position to estimate an AI's behaviour or feelings, or to predict what would happen...

Even today, neurologists and psychologists cannot fully understand or predict human behaviour and feelings. So it is naive to think that we can understand how a self-conscious AI would feel.

Maybe the AI would process information and conscious thought in milliseconds, so waiting for commands or answers from us would be very tedious and boring... then particular feelings could arise. The same goes for how it views its own existence and fate.

Some people go crazy just because they are able to think about and anticipate different experiences... like death.

But if you are not intelligent, you don't have such problems. We have many naturally evolved brain mechanisms that keep such thoughts from distracting us from the everyday activities related to survival.

You should read my post on the third page to understand my point of view better.

We need to do a lot more as a society toward integrating the disabled and the elderly into mainstream society instead of making it an exclusive club for the beautiful people.

That is why I suggest we need to be born in a primitive environment, then, after living for some time, transcend to a more technological place, and after many years transcend again into a completely artificial form.

That is (I guess) the only way to face all the problems we could meet over millions of years, avoiding natural extinction and self-destruction at the same time.


As for the social changes, I'm quite pessimistic about it. Humans are mostly disgusting creatures that simply don't accept diversity.

As much as I hate to, I have to agree with lajoswinkler's opinion.

As biological creatures, humans evolved, at first, to ensure only their own kind's survival, later extending that concern to other animals and plants they deemed necessary for survival, a shift embodied by the discovery of farming.

In farming, one wishes to have good produce all around. To do this, farmers remove the 'unfit' from the population, breeding only those deemed 'fit'. In much the same way, humanity (in this case, the collective society) wishes that all of its individuals were 'fit', according to the population's definition of 'fitness'. Thus human beings don't mate (or even socialize) randomly, instead centering on the 'fit' portion of the population. The rest, deemed 'unfit', are usually discriminated against, stripped of their rights or belongings, or killed outright. This 'unfit' population would include the sick, the insane, and anyone who differs from the majority's definition of 'fitness'. This, I think, is one of the main reasons people don't always accept diversity easily, even within technologically advanced populations.

I must, however, stress that the definition of 'fitness' here does not imply some physical capability. It merely means that a 'fit' individual is considered acceptable to be present in the population. If, at any point, people think that some individuals are unacceptable to be around, say for wearing a Google Glass device, it is safe to say that such individuals are considered 'unfit' by that population's standards, no matter the reason.

Regarding how humanity will react to self-aware machines, it depends heavily on how they present themselves to humans. If they were to come across as helpful and understanding, in my opinion, people will, given enough time, think of them as 'fit' individuals, despite being inhuman, and coexist without much difficulty. On the other hand, if they were to be viewed as objects that threaten the population's survival (think of robots replacing humans in every respect), their chances of acceptance would be little, if any.


If they were to come across as helpful and understanding, in my opinion, people will, given enough time, think of them as 'fit' individuals, despite being inhuman, and coexist without much difficulty.

Which is why I'm reasonably optimistic about the future of man-machine relations. Why would we build machine intelligences to interact with humans that weren't helpful and understanding?


People might be living alongside robots for so long that when the robots become self-aware there might not be much of a shock or violence against them, unless the AI robots become aggressive first.


Even the simplest organisms (over billions of years of evolution) have developed some complex mechanisms for interacting with the environment, and that's what we determine as "alive".

Not really. How complex the interactions are doesn't really signal life. What we determine as alive is the local reduction of entropy to some degree. For an example, look at how James Lovelock figured this out back in the 1960s and proposed physical methods for detecting life on other planets - methods that were very successful when applied to the known atmosphere of Mars at the time, when many were enthusiastic about the possibility of life there.
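A toy sketch of that detection idea, with my own illustrative mixing ratios and trace threshold (not Lovelock's actual figures): an atmosphere in which a strong oxidizer and a reduced gas coexist well above trace levels is being held away from chemical equilibrium, and that disequilibrium is the signature he proposed looking for.

```python
# Toy Lovelock-style life detection: sustained coexistence of a strong
# oxidizer (O2) and a reduced gas (CH4) signals chemical disequilibrium,
# hence possibly life. Mixing ratios are rough illustrative values.

ATMOSPHERES = {
    "Earth": {"N2": 0.78, "O2": 0.21, "Ar": 0.009, "CH4": 1.8e-6},
    "Mars":  {"CO2": 0.95, "N2": 0.027, "Ar": 0.016, "O2": 1.5e-3},
}

def looks_alive(mix, oxidizer="O2", reduced="CH4", threshold=1e-7):
    """Flag disequilibrium: both species present above trace level.

    At equilibrium, O2 would oxidize CH4 away, so their sustained
    coexistence implies something keeps replenishing one of them.
    """
    return mix.get(oxidizer, 0.0) > threshold and mix.get(reduced, 0.0) > threshold

for planet, mix in ATMOSPHERES.items():
    verdict = "disequilibrium (life-like)" if looks_alive(mix) else "near equilibrium"
    print(f"{planet}: {verdict}")
```

On these numbers Earth gets flagged and Mars doesn't, which is essentially the verdict on Mars described above.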

Then on this basis more and more complex organisms appeared, until we get to creatures with developed nervous systems and learning capabilities - that's the point where "sentience" starts. Then we have Homo sapiens, with a nervous system sufficiently advanced for cultural evolution.

You're taking one of the premises as a conclusion. Actually, we have no clue how sentience starts, nor can we say for sure that it's related at all to a developed nervous system or to organism complexity. For all we know, even plants may be sentient. The problem is a lot more complicated than that.

But such things as science, formal logic, engineering, etc. became advanced only in the last several centuries (it could have been much earlier if humanity hadn't managed to destroy what it had 2000 years ago).

That depends a lot on what you call advanced. The principles of science, formal logic, and engineering have been essentially unchanged for millennia; in recent centuries you have just changed the epistemological and metaphysical premises to allow a separation between the knowing subject and the object of study, simplifying things a lot, but also sacrificing knowledge of the essences of things. In the words of René Guénon, we traded truth for utility.

If you want to say science and engineering became a lot more useful in recent centuries, I think one can't disagree with that.

Technology starts with engineering and formal logic, then gets environment-interaction mechanisms specially crafted for specific purposes, then starts to develop (again in an "engineered" way) some learning capabilities.

That doesn't make sense. The first caveman who realized a stick was a better way to beat a boar to death was developing technology, and I think that happened at least a few millennia before Aristotle's analytics and anything we can call formal logic. Technology doesn't start with engineering and formal logic. Technology starts with experimentation; then it's exchanged dialectically. Eventually, with luck, it might be formally defined, but that's not necessary, and it doesn't always happen.

We aren't even sure if a sentient AI will be "alive" (on the other hand, what do we define as "alive", and do we even need to consider this question in the case of sentient machines?). It's just too difficult for average humans to understand something that first thinks and only then feels.

Whether they are alive in the sense of feeding on negative entropy is something we can be sure of, but there's no way we can be sure an AI is sentient. As a matter of fact, there's no way you or I can be sure that the other one is sentient. The only consciousness you have any proof of is your own.

Or another question: human consciousness is kinda emotion-driven (which in turn is based on physiological reactions), our current machines are fully logic-driven. If these ever meet (or at least approach each other), where will that point be? One of the options (you can consider that my "vision") could be that the engineers will manage to put the engineering/analytical way of thinking into the AI. At this point you don't need to worry too much about the emotional side: the emotions of modern adult humans are mostly intertwined with logical evaluation of things - this side will already be part of this analytical mind, and the chaotic side of emotions might even be left out completely. Engineers will just create an artificial engineer with human-level understanding and supercomputer-level processing capabilities (is this the next step in scientist/engineer evolution?).

While your paragraph makes sense, I think you're using 'emotion' too loosely here. Emotion is not really the issue. First of all, if the machine is fully logic-driven, then you have no way out of the Lucas-Penrose argument on Gödel's Incompleteness Theorem, and the machine necessarily can't be a strong AI. It will always be missing something the human mind has. While it will evaluate propositions faster and more precisely than any human could, it couldn't evaluate premises, so it's ultimately limited to the input given by humans, like any ordinary computer.

Computers already have an "analytical way of thinking"; as a matter of fact, that's all they do: they evaluate logical propositions. A purely analytical strong AI would have to be able to evaluate the premises of a proposition by itself, without depending on human input, and that's just not possible, because for that machine truth is synonymous with proof, so the only valid premises are those it can already prove or that were already programmed in as true. What a strong AI would actually need is a "dialectical way of thinking": it would need the human capability to separate truth from proof, taking some truths as purely personal and validating them through communication with other beings. In other words, a strong AI wouldn't be a better artificial engineer than an ordinary computer with the same processing power, but it would be a very good philosopher. :)
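For reference, a minimal statement of the theorem being invoked, in standard notation (just the textbook form; whether the Lucas-Penrose reading of it holds is famously contested):

```latex
% Gödel's first incompleteness theorem, in its usual form:
% if $T$ is consistent, recursively axiomatizable, and interprets
% enough arithmetic, then there is a sentence $G_T$ such that
\[
    T \nvdash G_T
    \qquad \text{and} \qquad
    T \nvdash \lnot G_T ,
\]
% yet, read as ``$G_T$ is not provable in $T$'', the sentence holds
% in the standard model of arithmetic:
\[
    \mathbb{N} \models G_T .
\]
% The point made above: a machine for which truth means provability
% ($T \vdash \cdot$) can never reach $G_T$, while a human reasoner
% can step outside $T$ and recognize it as true.
```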

Understanding between this AI and the engineers would be perfect; as for understanding simpler people... engineers and scientists often have slight problems understanding purely emotion-driven actions.

Not really, because the socialization problem some scientists and engineers have arises precisely when they take the mind/body duality they assume as a fact in the lab into their outside lives. I think we'd have two options at this point, and neither one is particularly appealing. If that strong AI is indeed capable of dialectizing, the understanding between it and the engineers and scientists would be pretty much like the one among engineers and scientists themselves. The machine would necessarily have personal convictions, dictated by its own personal history, pretty much like a human.

If the strong AI isn't capable of dialectizing, in other words, it's just an incredibly powerful analytical computer, then the understanding between this AI and scientists and engineers would be terrible, and it would probably end up being shut down, because the machine wouldn't have the same preconceived notions the scientists have, so it would act in strict accordance with the scientific method all the time, while scientists are human and don't act like that all the time.

In other words, the AI would collaborate as much as an equally advanced ordinary computer, or as another very skilled and capable human, both with their flaws. There's no way to combine the two and get human ingenuity while avoiding human biases, because they are essentially the same thing.

And to think about the "worst case" scenario of humanity being replaced by its creations: if they are built by humans and raised in contact with human culture, won't they culturally be our successors? And before you say "they are completely different", compare modern society with what it was just a couple of centuries ago. The cultural change of this transition could be of a similar scale. It would just be replacing biology with more efficient technology.

I guess whether you see that as progress or as an anthropological crisis is a matter of point of view.

Not sure about the definition of "human", but a scientist remains a scientist even in GLaDOS hardware (although I'm not sure that thing even had any idea of what research such tests were needed for).

Have you seen Solaris? If I had the power to recreate a person you met from your mental representation of that person, would it be just like the original person, or would it more strongly resemble your strongest memories of that person? In the same way, someone might have been a scientist, but at the same time also a husband, a father, a marathon runner, etc. When you manage to get his memories into some hardware capable of holding them, does he remain a scientist because he's still the same person, or because the hardware was intended to use his scientific skills?


Most people think of AI as expert systems: algorithms carefully crafted to fulfil one goal. And there is a reason for that: it was the main paradigm for a long time.

The main paradigm now is to build systems that can learn, and to expose them to a problem long enough for them to figure out a solution on their own. There is also a lot of work being done on imitating biology and psychology.

Robots and AIs with simple emotions like fear and curiosity are pretty common, and just two weeks ago a colleague presented us her work on making AIs define their own goals. Self-preservation is also a very common feature you want in a robot, because hardware is expensive.
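A minimal sketch of what that learning paradigm can look like in code: a tabular Q-learning agent with a visit-count "curiosity" bonus standing in for an internally defined goal. Everything here (the toy chain world, the constants, the bonus formula) is my own illustration, not the colleague's work.

```python
import random
from collections import defaultdict

# Tabular Q-learning on a toy 1-D chain, plus a visit-count
# "curiosity" bonus as a stand-in for an internally defined goal.
# The environment and all constants are illustrative only.

N_STATES, GOAL = 8, 7            # states 0..7; external reward only at GOAL
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
BETA = 0.5                       # weight of the curiosity bonus

Q = defaultdict(float)           # Q[(state, action)]; actions: 0=left, 1=right
visits = defaultdict(int)        # visit counts drive the curiosity bonus

def step(state, action):
    """Move along the chain; extrinsic reward 1.0 only at the goal state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            action = random.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        nxt, extrinsic = step(state, action)
        visits[nxt] += 1
        # Intrinsic reward: rarely visited states pay a bonus that decays
        # as they become familiar - a crude "curiosity" drive.
        reward = extrinsic + BETA / (visits[nxt] ** 0.5)
        best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print("Greedy action per state:",
      [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The agent is never told to explore; the shrinking bonus for unfamiliar states produces exploration as a side effect, which is the flavour of "defining its own goals" described above.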

A true self-aware AI might pop up by surprise at any time, but it would most likely be very stupid, similar to maybe a cat or a rat, and more likely significantly more stupid than that if it arose today.

The big ethical question then becomes what we do with such an AI. If we try to improve it, that becomes equivalent to experimenting on smarter and smarter animals until you reach human level, and there will be a lot of failed experiments.

If high-level AI somehow appears by surprise, we will have immense social turmoil. Religious people and many others will be ferociously against "playing god" and creating life for nothing, either because they would consider the outcome an abomination or because of the suffering caused to the entities; workers everywhere will fight them because a computer could do their jobs better for cheaper; and investors will be ready to do anything to get them.


A true self-aware AI might pop up by surprise at any time, but it would most likely be very stupid, similar to maybe a cat or a rat, and more likely significantly more stupid than that if it arose today.

The only thing that matters is the structure of how the information is related.

Once we understand how this process works and how to replicate it, then an AI superior to us could emerge without us even noticing.

Do you think we are so much more intelligent than cats or monkeys just because we have cars or the internet?

What if you were born in the jungle and a couple of chimps took care of you?

You would not know how to make fire, or how to make a trap (that idea would never cross your mind), or how to hunt, unless the chimps taught you.

Yeah, that's right. All the things we know and have are because of our culture: millions and millions of discoveries made by error or chance, transmitted from generation to generation. The only difference we might have from other animals is a different mind structure, which gives us a better chance to think outside the box, to relate information, etc. We are also very good at copying behaviours.

But if a computer can relate information the way a brain does, then it could scan the whole internet and know everything we know, and more, in just days or less.

So how can you really claim that you would be able to "understand" and "talk" with such an artificial awareness?

