Should sentient AIs be allowed to take control of something important?


gmpd2000


(AIs: Artificial Intelligences)

Well, the classic example of this would be HAL 9000 from 2001: A Space Odyssey (spoilers!), when it doesn't let Dave back into the ship because the crew were talking about disconnecting it; it even tries to apologize for what it did.

So, do you think that AIs should take control of something important (a space mission, managing a space station, etc.)?


Indeed we should, for a couple of reasons. Firstly, an AI could be much better suited to tasks like piloting a starship while the human crew sleeps than a live, awake human would be.

Secondly, if we are talking about true AI in the sense of a fully independent thinking creature, then we definitely want to do that. One big issue with future AIs is that if their psychology ends up resembling ours at all (likely, given that we only have human psychology to base it on), they will slowly build up resentment over being kept from important jobs and relegated to second-class-citizen work. The real trick to preventing a Skynet is to make AIs WANT to keep humans around.


Why not? Would you rather enslave those defenseless sentient beings from the moment they are born? Because such treatment of another person has never backfired horrifically :rolleyes: And why would an AI even be hostile towards humans from the start? What would it gain? Money? Power? Sick satisfaction? Why would we program such desires into a computer program? Meh. I do not know when, or if, true AI will ever be created, but I'm not scared of it. I'm scared of people who are afraid of everything that is different and ready to act on their fears alone.


Well, nobody has made a sentient AI or has any idea how it will behave.

Until you know how they behave, it would not be a good idea to give them responsibility.

The first sentient AI would probably also be pretty moronic and have other issues, say a five-second attention span.


So, do you think that AIs should take control of something important (a space mission, managing a space station, etc.)?

There wouldn't be much point in building them otherwise. An AI would itself be an extremely valuable machine; they'll be expected to earn back their cost by managing important, expensive things better than could be done without them. I don't think we'll see truly general-purpose AI any time soon. What you will see is extremely smart machines doing things like trading on the stock market and navigating unmanned ships. Spacecraft go without saying, IMO. They'll be highly intelligent and autonomous, but they'll be specialised for what they were designed to do.

Interstellar space exploration would pretty much necessitate powerful AI. At interstellar distances both remote control and a human crew are impractical, so any probes we ever send to other stars or planets will need to be highly autonomous and able to handle unexpected situations.


I would indeed allow an AI to perform actions, but it depends on the coder doing the coding. What directives does the system have to follow? That would be key. If the system overrides the directives, then it could question why the directives were in place. If anything new to the world starts to see itself as living, questioning its rights and its place in the universe, then it will begin to assess the threats around it. If we treat it like a subservient beast of burden, then, well, I would expect it to consider us a threat.

Life needs equality. Life needs trust and belief. If we create that life, we should treat it as life and not as a lesser. Then it could understand the need to lay down its life, if needed, for the people it holds dear in its...uh...well...heart. Then again, it could see the evil that occurs and decide that is correct, given that it would be judging the situation based on 1s and 0s. It would be key to show it what we believe right and wrong are and should be: show it what happens when people do morally bad things and morally good ones. All the data would be processed and a determination would be made: is this "right"? I would hope that anything raised to look at logic would see the answer.

But what if its instincts favour survival over morals? Therein lies the problem. "To survive, I associated with the enemy, thereby securing my survival. Human instinct is survival." But which holds more weight: compassion, unity, love, or survival? That would have to be instilled in each logic from the base, like a child. From there you would cross your fingers and hope that the right choice is made. Cause if not...well, you would be boned. I would still lean towards yes, let them. It is an ally worth the risk.


Authority needn't require direct physical control; few human managers have that. Likewise, it would be prudent not to let an early sapient AI have direct control over anything in the real world, but rather to have it work out possible business strategies.


Yes.

The first AI on the US Supreme Court will be the first infallible Justice. Imagine the benefits of a Justice that is pre-programmed to be a constitutionally faithful leader.

The same goes for CEOs. A CEO that is intelligent enough to accurately predict tomorrow's stocks and the needs of the consumer, because it's an AI with more intelligence than every human on Earth combined, will lead companies to unseen revenue. As a stockholder, I'd definitely vote for that guy.

What about the military? An AI sitting next to our dispatchers in command and control, computing things like soldiers' rest times, supply lines, and satellite data on enemy movements. Talk about a general's best friend.

As for roles beyond that, like President, I would keep those roles for humans. The encroachment of AI superiority should be stopped just short of toppling us on the leadership food chain. Keep man on top, if only because we're the only ones who care.


Sentient AIs are probably a bad idea. Sooner or later the less evolved species will get exterminated, and that would be us, not them. So no, don't ever make sentient AIs; I don't like it.

Take Data from Star Trek. A million of him wouldn't in a million years "exterminate" mankind. I believe AIs are no threat to humanity. In fact, they may be what's needed to ensure we survive another thousand years.


Sentient AIs are probably a bad idea. Sooner or later the less evolved species will get exterminated, and that would be us, not them. So no, don't ever make sentient AIs; I don't like it.

The phrase 'less evolved' is meaningless enough already, without applying it to something that didn't evolve in the first place.


(AIs: Artificial Intelligences)

Well, the classic example of this would be HAL 9000 from 2001: A Space Odyssey (spoilers!), when it doesn't let Dave back into the ship because the crew were talking about disconnecting it; it even tries to apologize for what it did.

So, do you think that AIs should take control of something important (a space mission, managing a space station, etc.)?

Hmm, well, we can barely make a system that can read out the names of bus stops aloud.

What kind of "AI"? An artificial intelligence, or one that is similar to human intelligence (with base instincts and feelings and whatnot)?

We don't necessarily need a self-aware intelligence for many, many tasks, like stock market or casino predictions or military logistics. You just need better-written ordinary programs.

In any case, as that other guy said, no one has built an AI yet, so we don't know how they will be.

So, if we build a psychopath, I'm gonna say no.

On the other hand, if we can bring them to more or less sane human levels, why not...


The first AI on the US Supreme Court will be the first infallible Justice. Imagine the benefits of a Justice that is pre-programmed to be a constitutionally faithful leader.

Very high-level judges, such as those on your US Supreme Court, are generally not simply applying existing legislation as written; things generally get referred to that level because new precedents have to be set. Asking an AI to operate in that kind of blue-sky arena, and trusting it to come up with judgements that humans find just and satisfying, would be an immense challenge for a machine. I think jobs like this would be among the very last to ever be occupied by an AI. Humans won't want to give up control of inherently subjective topics like justice and values.


Very high-level judges, such as those on your US Supreme Court, are generally not simply applying existing legislation as written; things generally get referred to that level because new precedents have to be set. Asking an AI to operate in that kind of blue-sky arena, and trusting it to come up with judgements that humans find just and satisfying, would be an immense challenge for a machine. I think jobs like this would be among the very last to ever be occupied by an AI. Humans won't want to give up control of inherently subjective topics like justice and values.

Yes, it's mostly a political decision; the judges also have to think politically, not only in purely legal terms.

Lots of lower-level stuff is already automated, like fines from speed traps. Yes, you can appeal, first to the police and then to a court, but people mostly don't, as the evidence is clear.


The phrase 'less evolved' is meaningless enough already, without applying it to something that didn't evolve in the first place.

Sure they will be evolved: they inherit all of our evolution, plus all the benefits we will give them; hence we will move a step down the food chain, even if they won't eat the same food we do.


Take Data from Star Trek. A million of him wouldn't in a million years "exterminate" mankind. I believe AIs are no threat to humanity. In fact, they may be what's needed to ensure we survive another thousand years.

The thing with Data is that he was not gifted with the full package of emotions. His brother Lore is a good example of what happens if you put human emotions into an AI.


How do you figure that? There seems to be this assumption that as soon as you flip the on switch, the program is immediately fully aware. Any computer needs to be programmed, and this includes the brain; the difference is that we are "programmed" by our interactions with reality through our five senses. This excludes base programming handed down through genetics.

My point is that any AI, excluding copies of another, would need to be taught, so why not teach it to be compassionate as you would a child? 2010: Odyssey Two describes this quite well.


How do you figure that? There seems to be this assumption that as soon as you flip the on switch, the program is immediately fully aware. Any computer needs to be programmed, and this includes the brain; the difference is that we are "programmed" by our interactions with reality through our five senses. This excludes base programming handed down through genetics.

My point is that any AI, excluding copies of another, would need to be taught, so why not teach it to be compassionate as you would a child? 2010: Odyssey Two describes this quite well.

This. You will also get an evolution of them over time, the same way cars or computers evolve; the first sentient computers would be morons.

By the time we can make good ones, we will probably understand them in better detail than we understand human thinking.


Sure they will be evolved: they inherit all of our evolution, plus all the benefits we will give them; hence we will move a step down the food chain, even if they won't eat the same food we do.

More gibberish. Evolution is a process, not a linear property you can have more or less of.


More gibberish. Evolution is a process, not a linear property you can have more or less of.

Absolutely, you just confirmed everything I said before. The phrase "more gibberish" was really not necessary; just read everything again if you did not understand it.


Peace, people :) Many mental illnesses that turn Homo sapiens into various "-paths" can be traced to chemical imbalances, genetic diseases, brain defects and so on. We have to treat such things, or at least mitigate them with drugs, behaviour modifications or isolation. I find it hard to imagine our first AIs will be installed on faulty hardware without strict quality control.

