
Should sentient AIs be allowed to take control of something important?


gmpd2000


I do understand; I understand you've no idea of the nature or mechanism of evolution except through bad sci-fi featuring 'the next stage of human evolution'.

I used the word "evolution" about AI the same way computers and cars "evolve". Yes, it's a totally different process: product development over time, as you improve and learn from mistakes.

The next major change in humans will come the same way, through genetic engineering. Human evolution over the last 10,000 years has mostly made us more resistant to diseases, which became more common as we grew numerous and started living close together, plus a few other things, like adults being able to digest milk.

Without product improvements and bug fixes, we would see the same rate of change over the next 10,000 years :)

And yes, the 'evolve to the next stage' trope is so stupid that I tend to overlook unrealistic spaceship behavior by comparison.


Peace, people :) Many mental illnesses that turn Homo sapiens into various "-paths" can be traced to chemical imbalances, genetic diseases, brain defects and so on. We have to treat such things, or at least mitigate them with drugs, behaviour modifications or isolation. I find it hard to imagine our first AIs will be installed on faulty hardware without strict quality control.

The debugging process, on the other hand, is likely going to be something AI Psych textbooks are written about.


If an AI goes rogue it will be due to human error.

Computers do what we tell them, and nothing more.

But sentient AIs should only be used where sentience is an advantage. Why does spaceship management require sentience? Why can't it be done by a non-sentient program?


I do understand; I understand you've no idea of the nature or mechanism of evolution except through bad sci-fi featuring 'the next stage of human evolution'.

Even if that's so, who are you to judge people here? Seriously, every time I see someone provoking and starting a fight here, one name shows up: yours. This forum is about discussing various topics, not attacking people; if you are not up for that, then just ignore those who in your opinion have no idea of the world. The only two statements you have brought into this discussion so far are that I am talking gibberish and that I have no idea about nature or the mechanism of evolution. If you are so smart, why don't you enlighten us with your own opinions about the OP?


So, do you think that AIs should take control of something important (a space mission, managing a space station, etc.)?

Why not? We already trust computers with life-or-death decisions now, so a mechanism that can deliberate with a little more nuance should only make things better. The big question in my mind is whether you should grant it any rights, as disconnecting or destroying a sentient intelligence is not the same as scrapping your TV.


There aren't many important things we need sentient AIs for.

For example, if you want an AI to fly a super-ultra bird plane, do you really need to be able to talk to the AI for it?

Or do you just need to mess around with the code inside using a monitor, so that it behaves as desired?


Indeed we should, for a couple of reasons. Firstly, something like an AI could be much better suited to tasks like, say, piloting a starship while the human crew sleeps, rather than keeping a live human awake for it.

Secondly, if we are talking about true AI in the sense of a fully independent thinking creature, then we definitely want to. One big issue with future AIs is that if their psychology ends up resembling ours at all (likely, given that we only have human psychology to base it on), they will slowly build up resentment over being kept from important jobs and relegated to second-class-citizen work. The real trick to preventing a Skynet is to make AIs WANT to keep humans around.

For an interstellar mission, there isn't a viable alternative to a sentient AI. There's no way non-sentient computer programs would be flexible enough to conduct, with no human input, a comprehensive exploration of an alien planet we initially know very little about, and waiting 8+ years for a reply from mission control if there's a problem isn't feasible.


For example, if you want an AI to fly a super-ultra bird plane, do you really need to be able to talk to the AI for it?

Who says sentient means talking? I can imagine a lot of uses in safety mechanisms that need to make the right call in complex situations where it is one evil versus another. Maybe even weapons systems. Check out the work of Philippa Foot and those who followed her for examples.
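To make that concrete, here's a toy sketch (all names and numbers invented, nothing to do with any real system) of a safety mechanism picking the lesser of two evils by minimizing expected harm:

```python
def choose_action(options):
    """Pick the action with the lowest expected harm.

    options maps an action name to (probability_of_harm, severity).
    Purely illustrative numbers; a real system would need far more care.
    """
    return min(options, key=lambda a: options[a][0] * options[a][1])

# A hypothetical braking dilemma: every available choice carries some harm.
dilemma = {
    "swerve_left": (0.9, 2.0),  # likely, but minor harm (0.9 * 2.0 = 1.8)
    "brake_hard":  (0.3, 8.0),  # unlikely, but severe harm (0.3 * 8.0 = 2.4)
}
print(choose_action(dilemma))  # -> "swerve_left"
```

Whether minimizing expected harm is even the right rule is exactly what Foot's trolley cases probe.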


No. Unless you subscribe to vitalism, why would it be?

Is it anything more than theoretical yet?

I think the question of whether any animal other than human is "sentient" is still controversial, though I'd lean toward "Yes, many nonhuman animals are sentient" myself.

I guess it ultimately depends on how you define "sentient" and "artificial intelligence."

There are those experts in psychology who seem to insist that the minimum threshold for "sentience" is human psychology. That excludes lots of other animals that I suspect are smarter and more adaptable than the most adaptable computer programs ever dreamed of, much less created. I don't count myself among those experts, but I also don't pretend that I can fully refute or rebut their arguments.

In sum, I didn't ask the question because I figure I've got the answer to it. I asked it because it is a good question to discuss.

Edited by Diche Bach

If an AI goes rogue it will be due to human error.

Computers do what we tell them, and nothing more.

But sentient AIs should only be used where sentience is an advantage. Why does spaceship management require sentience? Why can't it be done by a non-sentient program?

I tend to agree with your first statement.

I disagree with your second, if only because of its absolutist wording. There are plenty of instances of computers doing things they were not told to do. Generally these involve something like a stray charged particle from the environment flipping a bit in the computer. Several years ago there was a problem in some cars (Toyotas, I think) where secondary charged particles from cosmic rays (a cosmic ray hits the air, the air shoots off a charged particle, etc.) were suspected of hitting the computers and flipping bits, making the computer think that the accelerator had been floored, or that the brakes had been applied.
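As an aside, the classic defense against exactly that failure mode is redundancy. A minimal sketch, assuming nothing about how real automotive computers actually do it: keep three copies of a value and majority-vote on every read, so a single flipped bit gets outvoted.

```python
def tmr_read(a, b, c):
    """Majority vote across three redundant copies of a value.

    A single upset (one corrupted copy) is outvoted by the other two.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("double fault: no two copies agree")

value = 42
copies = [value, value, value ^ (1 << 3)]  # cosmic ray flips bit 3 of one copy
print(tmr_read(*copies))  # -> 42; the corrupted copy is outvoted
```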

For spaceship management near a controlling source (say, Earth to Mars), it isn't strictly necessary. But when the ship is out at Alpha Centauri and we still haven't figured out FTL comms, you want something aboard that can get 'creative' at solving the problems that come up, rather than waiting 8+ years for the "oops" message to reach NASA and a reply to show up.


"Sentient" only means capable of responding to external stimuli; feeling. So in a sense, any machine that responds to external stimulus is already sentient, since its behavior is modified by external effects.

In fact, just about anything can be viewed as being sentient to some degree, as long as it's a system that responds to external stimuli.

When considering Earth lifeforms, the most sentient beings will tend to be those with the most complex brains; humans are probably the most sentient beings on Earth, but not necessarily so, as it depends on how sentience is defined. Remember, though, that we are animals, and responding to external stimuli is exceedingly important for all creatures. For machines, "brain" complexity and sentience are not necessarily related. As an example, imagine an exceedingly fast supercomputer that performs even more calculations per second than the human brain, but is ONLY programmed to calculate digits of pi. It doesn't respond to anything at all; it just runs through an endless numerical expansion.

So ANYWAY, I believe the word people are looking for here is SAPIENT. Sapient means intelligent, smart. Humans are definitely the most sapient animals on Earth, though I would hesitate before calling them the ONLY sapient animals; the other great apes, as well as dolphins, elephants, and even some birds, have been shown to be extremely intelligent and aware. Sapient beings would also tend to be highly sentient beings as well.

In answer to the original question, of course sapient machines should be allowed to make critical decisions if that machine has been shown to be trustworthy at the task. After all, unless the laws of physics are being constantly violated inside our heads (of which there is no evidence at all), we are simply chemical-electrical machines ourselves. There would be no reason why a sufficiently powerful computer (none of which has been built yet) running the right software could not be as sapient and sentient, or even MUCH more so, than we are ourselves.

Oh and the idea that machines don't do anything that they aren't programmed to do is very wrong; we already use evolutionary algorithms to come up with things that no human would ever think of. Technically, randomization even counts. And again, if machines are only capable of "doing what they are programmed to do", then the same must apply to human beings, unless the laws of physics break down inside our heads.

Finally, I do take exception to the term "AI". I believe that "AI" should be reserved for simplistic systems that merely mimic the appearance of being intelligent; take "Siri" as an example. A truly sapient machine should NOT be referred to as an "AI", since there is nothing artificial at all about its intelligence; its brain is artificial, not its intellect! A better term is "machine intelligence", or "MI".

Edited by |Velocity|

I tend to agree with your first statement.

I disagree with your second, if only because of its absolutist wording. There are plenty of instances of computers doing things they were not told to do. Generally these involve something like a stray charged particle from the environment flipping a bit in the computer. Several years ago there was a problem in some cars (Toyotas, I think) where secondary charged particles from cosmic rays (a cosmic ray hits the air, the air shoots off a charged particle, etc.) were suspected of hitting the computers and flipping bits, making the computer think that the accelerator had been floored, or that the brakes had been applied.

For spaceship management near a controlling source (say, Earth to Mars), it isn't strictly necessary. But when the ship is out at Alpha Centauri and we still haven't figured out FTL comms, you want something aboard that can get 'creative' at solving the problems that come up, rather than waiting 8+ years for the "oops" message to reach NASA and a reply to show up.

Most of what computers do that they are not supposed to comes down to software bugs; a variation is unexpected input or settings that generate "features". Often they are doing exactly what they are supposed to do when they run into a situation that makes no sense to them: shut everything down.
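In code form, that fail-safe habit looks something like this (a toy sketch with made-up names, not any real control software): validate the input, and when it makes no sense, shut down instead of guessing.

```python
import math
import sys

def apply_thrust(level):
    print(f"thrust set to {level:.2f}")  # stand-in for real actuation

def emergency_shutdown(reason):
    print(f"SAFE MODE: {reason}")  # the designed response to nonsense: stop
    sys.exit(1)

def set_thrust(level):
    # Unexpected input triggers the designed response (shut down),
    # not an accidental "feature".
    if not isinstance(level, (int, float)) or math.isnan(level) \
            or not 0.0 <= level <= 1.0:
        emergency_shutdown(f"nonsensical thrust command: {level!r}")
    apply_thrust(level)

set_thrust(0.75)   # fine
set_thrust(-3.0)   # makes no sense -> shut everything down
```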

And yes, even on Mars an intelligent robot would speed things up a lot. For interstellar missions it's pretty much required.


Most of what computers do that they are not supposed to comes down to software bugs; a variation is unexpected input or settings that generate "features". Often they are doing exactly what they are supposed to do when they run into a situation that makes no sense to them: shut everything down.

And yes, even on Mars an intelligent robot would speed things up a lot. For interstellar missions it's pretty much required.

My point was primarily that you cannot have 100% faith in machines working perfectly every single time. I would love it if we could, but we cannot.

Velocity made a good point that I should have remembered: it is quite possible to set up systems with things like genetic algorithms and neural nets that can grow in unexpected ways. There is a very entertaining program whose name escapes me at the moment that is a great example of this. You give the system an initial starting condition: three wheels, and some angles and distances that define the triangles making up the vehicle. When you hit start, your involvement is done. The vehicle advances along a procedurally generated track until it can no longer move forward for some reason. If this run was more successful than the last, the system exaggerates the features responsible, maybe adding a fourth wheel, maybe another triangle, and so on. It keeps going, trying new things and combining what worked with other things that worked. There is no way to predict what it will come up with after 30 iterations.
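The loop behind that kind of program is surprisingly small. Here's a minimal sketch of it (my own toy, not the actual program, with a made-up "fitness" standing in for distance travelled): mutate the design, keep whichever does better, repeat.

```python
import random

random.seed(1)

def fitness(genome):
    # Stand-in for "distance travelled along the track": rewards a car
    # with about four wheels and mid-sized triangles. Invented numbers.
    wheels, sizes = genome
    return -abs(wheels - 4) - sum(abs(s - 5.0) for s in sizes)

def mutate(genome):
    wheels, sizes = genome
    wheels = max(1, wheels + random.choice([-1, 0, 1]))        # add or drop a wheel
    sizes = [max(0.1, s + random.uniform(-1.0, 1.0)) for s in sizes]
    if random.random() < 0.2:
        sizes.append(random.uniform(1.0, 10.0))                # grow a new triangle
    return (wheels, sizes)

best = (3, [2.0, 9.0, 4.0])  # the hand-made starting condition
for generation in range(30):
    child = mutate(best)
    if fitness(child) > fitness(best):  # keep whichever run did better
        best = child

print(best)  # no one wrote this design; the loop found it
```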

And yes, an AI running on a Mars rover would be great, but my point was that it is not as necessary there as on an interstellar mission.


Velocity made a good point that I should have remembered: it is quite possible to set up systems with things like genetic algorithms and neural nets that can grow in unexpected ways. There is a very entertaining program whose name escapes me at the moment that is a great example of this. You give the system an initial starting condition: three wheels, and some angles and distances that define the triangles making up the vehicle. When you hit start, your involvement is done. The vehicle advances along a procedurally generated track until it can no longer move forward for some reason. If this run was more successful than the last, the system exaggerates the features responsible, maybe adding a fourth wheel, maybe another triangle, and so on. It keeps going, trying new things and combining what worked with other things that worked. There is no way to predict what it will come up with after 30 iterations.

You're thinking of boxcar evolution, I think. http://boxcar2d.com/

We already procedurally generate the layout of very complex integrated circuits. I have heard that no one actually lays out the location of every single transistor; there is software that follows design rules (possibly even some evolutionary algorithms) to lay out our most complicated ICs. There are over a billion transistors on a modern chip, after all.
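As a toy version of that idea (nothing like real EDA tools, just the principle): encode a design rule such as "connected cells should sit close together" as a cost, then let random improving swaps do the layout for you.

```python
import random

random.seed(0)

nets = [("a", "b"), ("b", "c"), ("a", "d")]  # invented connectivity
order = ["a", "b", "c", "d"]                 # cells placed along one row

def wirelength(order):
    pos = {cell: i for i, cell in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in nets)

for _ in range(200):
    trial = order[:]
    i, j = random.sample(range(len(trial)), 2)
    trial[i], trial[j] = trial[j], trial[i]
    if wirelength(trial) <= wirelength(order):  # keep swaps that shorten wires
        order = trial

print(order, wirelength(order))  # a layout no one placed by hand
```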

So the idea that we can never create something because we're not smart enough to understand what we're building is a fallacy, already shown to be false: no one person fully understands how even our current generations of integrated circuits work. What matters instead is that we create good design rules and algorithms for the software that generates the circuit layout. This DOES require an understanding of general principles, which we probably do not yet have in the case of intelligent machines. Perhaps all we need, though, is the right evolutionary algorithm and only a limited understanding of how consciousness works.

