What exactly is sentience?



So I was reading an interesting article in Wired magazine today. It was about a guy who hopes to build a complete, working model of a human brain in a computer. He claims that, if it works, it will be a fully sentient, self-aware intelligence. I thought to myself, "Nah, you can't have real intelligence in a computer. The only thing computers can do is follow strict instructions!" But then I began thinking, and started to get pretty existential. What exactly is intelligence?

Let's say that this model of a brain does work. It gets every single neuron right, and every connection between them right. If so, one can reasonably assume that it would function like a human brain and be intelligent. But what would that mean for us? Simple: it would mean that every single one of our thoughts, feelings, actions, ideas, etc. is governed, on some level, by simple, inescapable commands: When neuron A pulses1, neuron B does, too. When neuron D pulses1, a synapse1 forms between it and neuron C. So then, what is free will?
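Those "inescapable commands" can be sketched in a few lines of code. Here is a toy network (the neuron names and rules are invented purely for illustration) in which every step follows fixed rules, and yet the wiring itself changes over time:

```python
# Toy sketch of the "inescapable commands" idea: each rule is a fixed
# if-then, yet the network's behavior emerges from the interplay of rules.
# Neuron names and rules are invented for illustration only.

def step(firing, synapses):
    """Apply one round of fixed rules and return the next firing set."""
    next_firing = set()
    for pre, post in synapses:
        if pre in firing:  # "when neuron A pulses, neuron B does, too"
            next_firing.add(post)
    # "when neuron D pulses, a synapse forms between it and neuron C"
    if "D" in firing and ("D", "C") not in synapses:
        synapses.add(("D", "C"))
    return next_firing

synapses = {("A", "B"), ("B", "D")}
firing = {"A"}
for _ in range(4):
    firing = step(firing, synapses)
    print(sorted(firing), sorted(synapses))
```

Each rule alone is trivial, but once synapses can rewire themselves, the system's future behavior is no longer obvious from reading the rules, which is roughly the free-will puzzle in miniature.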

But on the other hand, consider this: Thousands of man-hours are spent writing the most complex code ever written, on the largest supercomputer ever built, and when it's turned on, nothing happens.3 Every detail is perfect, every neuron, every synapse, everything. But it's not intelligent. It just sits there. What would this mean? What makes us special?

Note: Please respect everyone's opinions, and answer constructively. The internet has a tendency to devolve into flame wars from much more innocent topics than this, but I trust the maturity of the KSP community. Thank you!

Footnotes:

1. This probably isn't the right word, but I hope you know what I'm talking about.2

2. I know vaguely how the brain works, I just don't know the words. I swear!

3. Then, after a few seconds, nothing continues to happen.4

4. I just finished reading Hitchhiker's Guide to the Galaxy. Good book.

5. Wait, how did you get here?


Most everything I post in this forum is pure speculation and often incorrect, but this one time I can confidently say: "No one knows."

Human psychology shows most of us to be creatures of habit, and on some level predictable, but our behaviors can be modified and changed, both at will and against it, to become extremely unpredictable (see Pavlov's dog).

To say a computer can't host a human brain because any simulation is merely synthetic is, I personally believe, a cop-out. A simulation of that magnitude would amount to an emulator, and I will say that I would personally view a perfect emulation of a human brain as the real deal. Why not? It can do everything the real one can, and it would have all of the same handicaps and quirks.


"What exactly is sentience?" is one of those philosophical navel-gazing questions that no one really has a good answer to, because no one really understands intelligence well enough to form the appropriate questions, let alone solve for 42.1

There have been a lot of pseudo-answers over the years, like Hofstadter's argument that sentience is a series of feedback loops, or the idea that sentience is the ability to maintain models of phenomena in your head (including models of your own thought processes). But really, you either have to frame things narrowly enough that the answers are self-evident ("self-awareness" defined as the ability to recognize yourself in a mirror, and whatnot) or you just have to admit that we don't have any clue.

One would hope that the ability to build a working model of a human brain would go a long way towards solving these questions - it would give us something to poke at and prod with sticks in a way that would be unethical for us to do on living humans.2

1What do you get if you multiply six by nine?

2Of course, if it really were a faithful reproduction of a human mind, why would poking it with a stick be any more ethical than poking a real human with a stick?

3...and some good bits about frogs.


I believe sentience, or having a working mind, is the ability of the "thing" to ask a question on its own, without any prior stimulus. For example, apes who have been taught sign language don't ask "what is this?" when left alone. When a human asks "would you like food?" the ape signs "yes" back, but it doesn't go up to the human and ask "can I have food?" on its own.


'Sentience' is the thing that proves there's a fundamental difference between humans and 'animals' intellectually, not just us being farther ahead on a scale. It's impossible to give a proper definition, because it's something that probably doesn't exist, and about whose nature there are only vague guesses. There used to be quite 'hard' definitions, e.g. 'can make use of tools, and produce them themselves', but they all got scrapped after people pointed out that there were 'animals' that could in fact do all of them.


I believe sentience, or having a working mind, is the ability of the "thing" to ask a question on its own, without any prior stimulus. For example, apes who have been taught sign language don't ask "what is this?" when left alone. When a human asks "would you like food?" the ape signs "yes" back, but it doesn't go up to the human and ask "can I have food?" on its own.

Interesting opinion. I have read that before, about the apes. The article I read mentioned something called a 'Theory of Mind' that people have, but most animals1 do not have. It is the ability to understand that other people have a completely independent mind from yours. Apes don't have this, so they don't know that the person to whom they are 'talking' knows things that the ape doesn't. This is why they never ask questions, because if they don't know the answer, they don't realize that other people might.

People don't have a Theory of Mind until a few years of age, so some experiments have been done with very young children where a person tried to convince the children that they (the adult)2 prefer broccoli over chocolate cake, but the children couldn't understand that someone could have a different opinion.

So, maybe you could say that sentience = Theory of Mind. I like this definition, because if a species (like humans) starts asking questions, then eventually they'll get to where we are now (trying to unravel the mysteries of the universe), and beyond (I don't know what's beyond, it hasn't happened yet). Which is, I think, a pretty good definition.

it would give us something to poke at and prod with sticks in a way that would be unethical for us to do on living humans.2

2Of course, if it really were a faithful reproduction of a human mind, why would poking it with a stick be any more ethical than poking a real human with a stick?

This is an interesting point. I guess you could say we could easily back it up, it being a computer, in case we broke something. But if, again, it was a faithful reproduction, then it would feel pain.5

Footnotes:

1. I don't remember, are humans the only known animals with this ability, or are there others? I know apes don't, but I wouldn't be surprised if dolphins did. Some of them are pretty intelligent.

2. I am a bad writer. I tend to use pronouns in the most confusing possible way, and then am unable to fix it. I can speak just fine, I swear! :confused:

3. I like using footnotes.4

4. I realize that nothing leads to this3 footnote, so it's probably pretty lonely.

5. I just realized how cool it is that technology has come so far that we can have a legitimate conversation about sentient computers that feel pain. But still no flying cars...6

6. I would actually prefer a driving plane, but flying cars are more cliché.7

7. I'm sorry if I used the wrong accent mark. I only speak English, and though our grammar and spelling (and language overall) is spectacularly convoluted and terrible, we don't ever use accent marks.8 :P

8. Oh dear, I appear to have become addicted to footnotes and now I can't stop.

P.S. I just realized how few posts I have. My account was lost in the April catastrophe, in case anyone was wondering.


'Sentience' is the thing that proves there's a fundamental difference between humans and 'animals' intellectually, not just us being farther ahead on a scale. It's impossible to give a proper definition, because it's something that probably doesn't exist, and about whose nature there are only vague guesses. There used to be quite 'hard' definitions, e.g. 'can make use of tools, and produce them themselves', but they all got scrapped after people pointed out that there were 'animals' that could in fact do all of them.

This.

On a lighter note, anything Captain Kirk will bed. :P


"Nah, you can't have real intelligence in a computer. The only thing computers can do is follow strict instructions!" But then I began thinking, and started to get pretty existential. What exactly is intelligence?

Suppose deep down that's all our brain is doing too?

But on the other hand, consider this: Thousands of man-hours are spent writing the most complex code ever written, on the largest supercomputer ever built, and when it's turned on, nothing happens.3 Every detail is perfect, every neuron, every synapse, everything. But it's not intelligent. It just sits there. What would this mean? What makes us special?

What's the right question? It might mean that your model made faulty assumptions. For instance, are you just modeling neuron action potentials (relatively easy), or are you modeling in high fidelity all of the complex molecular underworkings of each individual neuron and insulating cell, sodium channels and all that included (not at all close to possible with current technology)?

If you made a literally perfect copy of a brain, indistinguishable in all physical characteristics from the original, and, negating stochastic effects, it did not function the same way as the original, then it would imply that the brain's function cannot be fully described in terms of natural physics... spooky... But do we really have any reason at all to suppose this, given our knowledge of biology? Is there some phenomenon which we observe for which a physical explanation is impossible?

'Sentience' is the thing that proves there's a fundamental difference between humans and 'animals' intellectually, not just us being farther ahead on a scale.

It gets tricky, huh? The thing about observing sentience from the outside is that it is entirely subjective. People have posited things such as the Chinese room conjecture to argue against machine intelligence. Basically, the argument is that even if a computer were made that could pass the Turing test (any arbitrarily sophisticated Turing test, even), that computer still wouldn't necessarily be sentient. It's a double-edged sword, though. If you've disproved machine sentience using the Chinese room, then you are going to have a really hard time dealing with the fallout: namely, that the logic can just as easily be applied to biological entities as well. For instance, you might see a person, and by observation be led to believe that he is sentient, but he is really dead on the inside. Unfortunately we cannot 'look inside' and directly observe sentience; we can only observe outside signs and behaviors we associate with sentience, so we can't actually tell for sure one way or the other; for instance, whether sentience actually does separate us from other animals in any non-tautological sense. Either way the implications are discomforting.


It gets tricky, huh? The thing about observing sentience from the outside is that it is entirely subjective. People have posited things such as the Chinese room conjecture to argue against machine intelligence. Basically, the argument is that even if a computer were made that could pass the Turing test (any arbitrarily sophisticated Turing test, even), that computer still wouldn't necessarily be sentient. It's a double-edged sword, though. If you've disproved machine sentience using the Chinese room, then you are going to have a really hard time dealing with the fallout: namely, that the logic can just as easily be applied to biological entities as well. For instance, you might see a person, and by observation be led to believe that he is sentient, but he is really dead on the inside. Unfortunately we cannot 'look inside' and directly observe sentience; we can only observe outside signs and behaviors we associate with sentience, so we can't actually tell for sure one way or the other; for instance, whether sentience actually does separate us from other animals in any non-tautological sense. Either way the implications are discomforting.

I've never thought of it that way. It's almost like observing a train traveling at nearly the speed of light. From an observer's perspective, everything inside the train would appear nearly frozen in time (if I recall my basic relativity). As such, a rock, while seemingly unmoving, unliving, and unperceptive, may actually be a sentient being that just traverses time at a different rate.
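For what it's worth, the slowdown in that thought experiment is quantified by the Lorentz factor. A quick sketch, with speeds given as fractions of c:

```python
import math

def lorentz_gamma(v_frac_c):
    """Time-dilation factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_frac_c ** 2)

# A clock moving at 99.9% of c ticks roughly 22x slower to an
# outside observer; as v approaches c the factor grows without bound.
print(round(lorentz_gamma(0.999), 1))
```

So "frozen in time" is the limiting case: the factor diverges as the train's speed approaches c, which is why nothing with mass can actually reach it.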


I always thought of sentience as having the capability to create thoughts (that is, being able to think in your own language). That's just me though. From what I can see in this thread, there is no universally accepted definition.

Would you mind linking me to that wired.com article? I remember seeing a TV show about some computer scientists who were doing what you describe, so I want to see if it's the same guys and, either way, see how far they have come along.


People have posited things such as the Chinese room conjecture to argue against machine intelligence. Basically the argument is that even if a computer was made that could pass the Turing test (any arbitrarily sophisticated Turing test even), that computer still wouldn't be necessarily sentient. It's a double edged sword though. If you've disproved machine sentience using the Chinese room, then you are going to have a really hard time dealing with the fallout: namely that the logic can be just as easily applied to biological entities as well. For instance you might see a person, and by observation be led to believe that he is sentient, but he is really dead on the inside. Unfortunately we cannot 'look inside', and directly observe sentience, we can only observe outside signs and behaviors we associate with sentience, so we can't actually tell for sure one way or the other. For instance whether sentience actually does separate us from other animals in any non-tautological sense. Either way the implications are discomforting.

The problem with the Chinese Room conjecture and similar arguments is that it presupposes that the type of pattern recognition being done by the human with the instruction booklet is not, in fact, the primary building block of sentience (whereas, if it were, anything capable of replicating that sort of pattern recognition would be sentient). This seems to be a rather far-fetched assumption, since the one thing that human brains are really, really good at (and far exceed computer capabilities at present) is pattern recognition.

(Well, it's also presupposing that there exists an instruction booklet that a human could follow that would allow them to trick a Chinese speaker into believing that they spoke Chinese. Such instruction booklets do in fact exist, and are typically presented in a series of semester-long courses, the first of which variously title themselves Introductory Chinese, Chinese I, and Chinese 101.)

This is an interesting point. I guess you could say we could easily back it up, being a computer, in case we broke something. But if, again, it was a faithful reproduction, then it would feel pain.

The question123456 here is "do we base our ethical system on life" or "do we base our ethical system on intelligence?" If we decide that the salient property as regards our system of morals is whether or not an entity is a "flesh and blood" human, then there is no problem with poking and prodding a computerized replicate regardless of its putative ability to feel pain. If we decide that intelligence is the salient property, then it would be unethical to treat any "sentient" computer system in a way we would not so treat ourselves, even if we knew for certain that that system were incapable of feeling pain. While I am firmly in the latter camp as regards the basis of morality, there does not appear to be any a priori reason why one choice would be more appropriate than the other. Indeed, evolution tends to suggest that picking the former is the superior option (insofar as species which choose the latter might quickly cease to be species when confronted with entities which chose the former); although as an aside I should note that thinking similar to the former choice has been used to justify some truly horrible events in our past.

1-6I was going to build a footnote labyrinth here, a la XKCD, but, as it turns out, I'm far too lazy to actually put in that much effort.

Edited by Stochasty

So, there are a few things I'd like to say about this... :)

Firstly, in a strictly purist sense, I do not believe that there is anything particularly special distinguishing the laws of nature that apply to me from, say, the chair I am sitting on. Life is basically unmeasurable and unquantifiable if you are looking for anything beyond a certain accuracy, and so, I believe, is sentience as a property of life.

It's a very useful label. The example that springs to mind is the pile of sand: as we add grains one by one, we can never say when they become a pile, and we might conclude (wrongly) that the pile therefore doesn't exist. It does in fact exist, but it is a useful approximation to reality that we employ because stating precise numbers of grains is impractical; we just have some ballpark idea that we call a pile.

The same is so with life and its many properties. Some are defined 'precisely' in terms of chemistry or other criteria, but even those can be challenged in this way... right the way down to constituent parts and even the logic and background for our theories. I've seen very reasonable arguments that any complete theory of everything (with some assumptions about how good GR and QFT are...) has no precise concept of 'vacuum' or 'state', that measurements become meaningless at the tiny scale, things like space and time lose meaning and are emergent from something else - or even the concept of countability being unusable (and hence lots of axiomatic set theory, on which we base almost all of our mathematics and science).

Then there is the question of what we think of as life in concrete examples... What about prions, tiny protein fragments which will replicate in vivo, displaying a key property of what we consider life: reproduction? Virions, which are essentially giant molecules and do the same, maybe more, by encoding genes that affect their hosts... or viruses, which are the same but happen to be encapsulated in the protein their RNA/DNA encodes the genes for assembling... and what about bacteria and so on? There are also stranger things like Spiegelman's monster, a strange result of forced evolution containing 'merely' a few hundred bases, but able to replicate itself in the presence of RNA replicase. Where do we draw the line? What about software? By some measures computer viruses can be considered alive, and thanks to radiation, error tolerances and various other quirks of nature they are now evolving, live, in an artificial ecosystem. 'Artificial' is another one of those words though: where do we draw the line? How is my house or a factory any less natural than an ant hill or a beehive? How derived does a tool need to be before it stops being natural? On the flipside, how can we justify not considering prions, virions, Spiegelman monsters or computer viruses to be life?

Aside from this, I don't think you really know how the brain works; let's be honest here. No offence, it's just that whilst neural networks are certainly capable of learning and are powerful tools for this, they are not the only functionality in the brain. There are many functions which are poorly understood on the cellular level, involving the ion channels and their self-regulation, which have effects on the neuron firings. The glial cells are also less well understood than the neurons, but are known to be important and at least partly responsible for brain function. I think the best neuroscientists will even deny having a good or near-complete understanding of the matter, and medicine is one of the more 'dangerous' of sciences for constantly changing its mind and not being able to easily carry out good-quality experiments to pin things down (not always such a bad thing...)

There are plenty of people working on artificial neural networks and machine learning in general, by the way, and making things that can be considered intelligent has already happened, arguably going right back to the perceptron. In fact, I believe the human neural summing function is known (or at least some generalisation or approximation of it), but, being the result of chemical processes that come out of messier math than we have even invented yet, it is a monstrous thing to calculate using classical computing resources. I wish I could quote a source for this, because a figure here would really help pin down exactly how feasible it will ever be to build a 'true' replica of the brain's neural network using classical computers.
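Since the perceptron came up: here is a minimal sketch of one, trained with the classic error-correction rule. The weights, learning rate, and target function are illustrative choices, not anything the brain actually uses; a real biological summing function is, as noted above, far messier.

```python
# Minimal perceptron sketch: a weighted sum plus threshold, trained
# with the classic error-correction rule (Rosenblatt-style).

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # error-correction rule:
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR, which is linearly separable (XOR would fail:
# a single perceptron can only draw one straight decision boundary).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

The famous limitation, per the comment in the code, is that one such unit can only separate classes with a single straight line, which is why multi-layer networks were needed before anything brain-like could even be attempted.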

I do not subscribe to free will or to any interpretations of quantum mechanics (including those which claim to have uncovered the magic of free will); I prefer the "it just works, and the abstractions I use in my daily life are as useful for describing this as 'pile' is when discussing sand" interpretation. As a random aside, am I the only one who finds spooky action at a distance far more reasonable than infinitely small things that must occupy the same point in space and time to communicate?

That being said, I have a rather dull outlook on life, it could be said. At the same time, I can't really tell that I don't have free will; the illusion is rather convincing and works pretty well to an everyday approximation... I won't complain that the universe sees fit to allow us these comforting delusions of self-importance; it helps me feel smug when 'I' am successful. It also makes it a little easier, somehow, to not be worried about the meaning or consequences of approaching, asking, or answering these 'deep and meaningful' questions. :)

EDIT: nearly forgot, referring to the 'question asking' criterion: what about animals who learn to perform one behaviour when they expect you to perform another in response? This requires no great intelligence. And in fact, what is there to say that asking questions isn't learned behaviour in humans because it provides us with reward? Most obviously, we get attention when we receive an answer. You never asked a guy/girl where he/she worked, or some other mundane question, only because you wanted his/her attention for a few more minutes while you thought of something worthwhile to say? For example...

Edited by jheriko

The same is so with life and its many properties. Some are defined 'precisely' in terms of chemistry or other criteria, but even those can be challenged in this way... right the way down to constituent parts and even the logic and background for our theories. I've seen very reasonable arguments that any complete theory of everything (with some assumptions about how good GR and QFT are...) has no precise concept of 'vacuum' or 'state', that measurements become meaningless at the tiny scale, things like space and time lose meaning and are emergent from something else - or even the concept of countability being unusable (and hence lots of axiomatic set theory, on which we base almost all of our mathematics and science).

I'd be careful about trying to use the lack of a "theory of everything" in an analogous fashion to our inability to define intelligence. We physicists go to a lot (and I do mean a lot) of effort to make sure that every question we ask is well-defined and has a quantitative answer (even the string theorists, although only barely). Space and time might be emergent properties, but once we have a working theory we'll know exactly how they emerge, and on what scales we can consider "spacetime" to be a good approximation. (I should mention that my current area of research is exactly this question.)

Same thing for life, in principle. The rules of Go are quite simple, but they encode the emergence of life and death. Same with chemistry, once we get clear on our definitions.

And so, eventually, it shall be for intelligence (thus, maybe, in the end, your analogy wasn't such a stretch after all).


I'm gonna save you all from the long rambling post that I wrote on sentience vs sapience, and I'll summarize the parts that were even halfway interesting.

To say that an intelligence can't be predicted is to say that either we lack the knowledge to make that prediction, or there's something non-deterministic going on. Unless there's something going on at the quantum level, there's not a lot of room to say that the chemistry of the brain is non-deterministic (though the level of detail that would be necessary to make a prediction probably goes beyond just the state of the neurons, since chemistry can affect thought).

This kind of leads to the assertion that if we truly have free will, then it comes from something beyond our physical body.

At which point, my free will simulation triggers its coping mechanism and makes an attempt at humor.

When I was in college (computer science), friends would ask if I thought that we'd ever have artificial intelligence. I'd always look at them and say "Why not? Anyone that works with computers half as much as I do can tell you we've already got artificial attitude."


Unless there's something going on at the quantum level, there's not a lot of room to say that the chemistry of the brain is non-deterministic (though the level of detail that would be necessary to make a prediction probably goes beyond just the state of the neurons, since chemistry can affect thought).

Some of you guys might be interested in the CBC's "Quirks and Quarks" radio science magazine show's segment on quantum biology from about a year ago. They interviewed Dr. Seth Lloyd of MIT, Dr. Greg Scholes of the University of Toronto and Dr. Jennifer Brookes at Harvard University's Department of Chemistry and Chemical Biology. There are links on the episode's web page to a Nature article and a Wired article about quantum biology. The segment and articles aren't directly related to brain chemistry, but if quantum effects are starting to be observed in some areas of biology then they are bound to appear in a lot more as we gain more knowledge.


We can determine whether or not a human being is sentient, without having a precise definition of what sentience is.

Why would that not apply to an artificial brain?

What is needed before an artificial human brain can be built is that we understand how it works. And with that, there is still a long way to go.


How would you define empathy? There was a heuristic program a while back that could have any sort of rule of thumb programmed into it. One set of rules it was given was to act as a psychologist or psychiatrist and to say neutral or (I think) empathetic phrases, and many real patients enjoyed having this computer as their shrink. So, would you have considered that to be sentient?
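The program described sounds like the classic keyword-and-reflection approach to "empathetic" conversation: no understanding, just rules that echo the patient's own words back. A toy sketch, with rules invented here for illustration:

```python
import re

# Sketch of the pattern-matching trick behind such "empathetic" programs:
# scan for a keyword pattern, then reflect part of the input back.
# Both rules below are invented examples.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {}?"),
]

def respond(text):
    """Return a reflected reply if a rule matches, else a neutral prompt."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please tell me more."

print(respond("I feel anxious about computers."))
# → Why do you feel anxious about computers?
```

Whether a sufficiently large pile of such rules would ever amount to empathy is, of course, exactly the question this thread keeps circling.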


I'd be careful about trying to use the lack of a "theory of everything" in an analogous fashion to our inability to define intelligence.

This isn't my intent - I was trying to draw analogy between 'life' and 'sentience' and things like 'space' or 'time', rather than analogising with some unknown theory of everything - i.e. as things that only make sense in coarse, practical day-to-day descriptions and not when you investigate them more closely.

The alternative analogy with the same meaning is perhaps that a theory of intelligence is analogous to classical physics... it only makes sense approximately, and only at a very coarse scale.

Sorry, just being nitpicky. :D


'Sentience' is the thing that proves there's a fundamental difference between humans and 'animals' intellectually, not just us being farther ahead on a scale.

But we ARE just further ahead on a scale. There's no fixed limit, above which you can be said to be sentient, as far as I know.

It's like sexual preferences: it's scientifically meaningless to talk about people simply being straight or gay. It's all a sliding scale, with most people near the "straight" end, some near the "gay" end, and a few closer to the middle. I imagine sentience the same way, with roundworms near the bottom, apes somewhere near our end of the scale, and us near the top.


'Sentience' is an illusion. It's semantics for human beings asking a lot of questions. The human brain is nothing more than one of the most complex things ever to exist, and one of its byproducts is that we feel we are in control of our actions and capable of independent thinking.

Theoretically we could simulate a human brain with a computer, but we lack the knowledge and technology. Most of the basic functions of the human brain, the 'basics', we understand. A lot of research has been done in artificial intelligence using something called 'nodes' (a software implementation). But the real limitation of a computer is that it's fundamentally binary, in contrast with the human brain, which has thousands of connections per neuron and can change those synapses in order to learn. So yeah, science is far from developing real sentience.

I will reference one of my favorite sci-fi authors here again: Michael Crichton, Prey.


Some interesting responses here. It's nice to see philosophy, science, and maturity all in one place. And on the internet, of all places!1 :sticktongue:

I thought of another, related question: I read a snippet from someone explaining why this fully functional replica of our brain could never work. They said that no intelligence will ever be able to fully understand itself. I don't know where I could find this, nor can I explain it as well as they did, but take my word for it that it was a compelling argument. What is your opinion on this idea?

I personally believe that it makes sense. However, the human race as a whole has a far greater intelligence than a single human. We have an extraordinary ability to cooperate (when we're not killing each other). Look at the Human Genome Project: Thousands of scientists worked together to accomplish a goal that many thought would be impossible, and that indeed no single person could have done. This project that I mentioned in the OP is a similar project drawing together hundreds, and hopefully in the future thousands, of neuroscientists. Perhaps no single one of these scientists, or anyone, can fully understand and comprehend their own brain. But each one could work on a small segment, and all of this data could be amalgamated. While no one person could understand the entirety of this data, the fact that such a database (hypothetically) exists would prove that the human race as a whole understands an individual brain. Kind of like swarm intelligence.

But we ARE just further ahead on a scale. There's no fixed limit, above which you can be said to be sentient, as far as I know.

I agree with you on this, but Kryten has a point. There is a fundamental difference between human minds and those of all other animals. You could say this is just another step along that scale, but it's a fairly significant step.

1. I guess I've just had some bad experiences on some less-mature forums. *cough* Minecraft forums *cough*

Edited by itstimaifool
