
Sentient Computer


Voyager275


I read it, but it doesn't state what kind of processing power would be needed if sentience is based on information processing speed.

Which may be too hard to guess.

The human brain is massively parallel, with every neuron being a kind of processor in itself, insofar as it receives input, gives output to connected cells as a result of that input (and/or its own underlying processes), and also changes its own structure as a result of received input.
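That input-sum-fire-adapt loop can be sketched in a few lines (a toy model; the threshold and learning rate are arbitrary illustrative values, not anything measured from biology):

```python
# Toy "neuron as processor": it sums weighted inputs, fires when the
# sum crosses a threshold, and then adjusts its own weights, a crude
# stand-in for the structural changes (plasticity) described above.
def step(weights, inputs, threshold=1.0, rate=0.1):
    activation = sum(w * x for w, x in zip(weights, inputs))
    fired = activation >= threshold
    if fired:
        # Hebbian-style update: strengthen the connections that were active
        weights = [w + rate * x for w, x in zip(weights, inputs)]
    return fired, weights

fired, new_weights = step([0.4, 0.4, 0.4], [1, 1, 1])
print(fired, new_weights)
```

The point of the sketch is only that input, output, and self-modification all happen in the same small unit; a brain runs billions of these in parallel.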

ANNs are an approximation to real neuronal networks, but so far they are tiny compared to the size of the biological neuronal network of the human brain.
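To put that size gap in rough numbers (the layer widths below are just an illustrative small network, and 10^15 synapses is a commonly cited ballpark, not a precise count):

```python
# Compare the weight count of a small fully connected ANN with a
# rough estimate of the synapse count of the human brain.
layers = [784, 1000, 1000, 10]        # illustrative layer widths
ann_params = sum(a * b for a, b in zip(layers, layers[1:]))
brain_synapses = 10**15               # commonly cited ballpark figure

print(ann_params)                     # 1,794,000 weights
print(brain_synapses // ann_params)   # hundreds of millions of times larger
```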

And so far we understand only a little of the processes that generate human consciousness/sentience (part of which may lie not in programming, but in the actual learning experiences humans have while growing up).

Therefore I'd say it is too early to make predictions about the processing power needed for artificial sentience.


First we need to define sentience.

From Oxford dictionary:

Definition of sentient in English:

adjective

Able to perceive or feel things

which is a rather poor definition for the purposes of this discussion.

A microcontroller with a simple light dependent resistor or a pressure plate is "able to perceive or feel things". Would such a device be called sentient?


People tend to define sentience as feeling in the same way they do, but they also extend it to animals that feel in a similar way.

People tend to define sapience, though, as thinking and feeling in the same way a person does. As animals are observed to be different, we do not attribute this quality to most, if not all, animals.

This may give us definite cut-off points between a person, a bird, a fly, a flower and a rock. Or it may give us gradual separations and differences.

However, until we know how to exactly model one or the other, or all of them, it's rather hard to say whether an AI is "the same" or not. So once we can model (if ever) human thinking and feeling, and once we can model animal thinking/reactions and feelings, then we can say yes or no to "we can do the same with a computer".

Theoretically, the answer is always "yes". But there may be practical limitations to what we can construct; see the rocket equation for where we hit physical limits. We may just be unable to hold onto a brain and read its patterns/arrangements long enough, without damaging them, to record and copy them. Or we may just be able to make a clockwork fly, a clockwork fish and a clockwork person. Clockwork Turing machines (computers) are entirely possible, but as said, some things are too impractical! So it may be too impractical to build a silicon "brain", just as it's too impractical to build a clockwork "brain" or a clockwork "Deep Blue" ( https://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29 ).

For now the answer has to be "unknown", until we do more research or get wiser. :)

Edited by Technical Ben

Maybe sentience is just the sum of its parts: with the ability to perceive and feel, to process this incoming data and to analyze the consequences of actions based on it, the sense of self-awareness "in time and space" that humans seem to possess arises automatically. We become aware of analyzing our environment, instead of acting automatically.

We can't be sure that sentience is a neatly defined quality; maybe it is entirely illusory. If it is purely a matter of organization, though, then a computer that matches the exact networking of the human brain, but built from different materials and parts, should possess sentience (and the other qualities of the human brain).


...

For now the answer has to be "unknown", until we do more research or get wiser. :)

Indeed, we simply do not know ... yet.

Computing power has been increasing exponentially since its invention. A modern smartphone dwarfs the room-sized monstrosities of the early 1960s when it comes to raw mathematical power. And if Moore's Law is to be believed, the end is still several decades in the future.
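That exponential growth is easy to sanity-check with the usual doubling-every-two-years form of the law (the 1971 baseline below is the commonly quoted Intel 4004 figure; treat the result as a ballpark, not a prediction):

```python
# Moore's law as a simple doubling model: transistor count doubles
# roughly every two years from a 1971 baseline (Intel 4004, ~2,300).
def transistors(year, base_year=1971, base_count=2300, period=2.0):
    return base_count * 2 ** ((year - base_year) / period)

print(f"{transistors(2011):.2e}")  # ~2.4e9, the right ballpark for 2011 CPUs
```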

Combine this with groundbreaking developments in quantum computers (which CAN do certain calculations in parallel) and who knows where it'll end.

[Figure: Transistor Count and Moore's Law, 2011]


Quantum computers, though, risk going back to the room-sized computer for a single byte of calculation power.

Plus, even with all the computational power in the world (the internet is not far off a single human brain in complexity/potential parts, IIRC), we still don't know where to start on the software to run on it. XD


Quantum computers, though, risk going back to the room-sized computer for a single byte of calculation power.

Plus, even with all the computational power in the world (the internet is not far off a single human brain in complexity/potential parts, IIRC), we still don't know where to start on the software to run on it. XD

The same was true when the first transistor was made. It was huge compared to a modern one and not very practical, certainly not for computation purposes. It took years to miniaturize them enough to make them a viable technology for computers.

Why would you expect quantum computers to skip the initial steps?


Some people are seriously trying to simulate a complete human brain, eventually. The project is supposed to deliver some results by 2023 ...

https://www.humanbrainproject.eu/discover/the-project/overview

I suppose, from a purely materialistic point of view, once it is possible to simulate all the neurons in the brain with all their connections, it would be possible to create a virtual human mind - provided we somehow miraculously get all the connections right. We might have to wait a little while longer until our computers are powerful enough: we have 1,000 trillion synaptic connections, so I guess we need computer memory on the order of petabytes to represent them in a simulation. :wink:
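The petabyte guess checks out with simple arithmetic (the bytes-per-synapse figure is an assumption; a real simulation would need far more state per connection):

```python
# Back-of-envelope memory estimate for storing every synapse.
synapses = 1_000 * 10**12       # ~1,000 trillion connections = 10^15
bytes_per_synapse = 4           # assumed: one 32-bit weight per connection
total_bytes = synapses * bytes_per_synapse
petabytes = total_bytes / 10**15
print(petabytes)                # 4.0 PB just for the connection weights
```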

Meanwhile we can simulate little worms :D http://www.openworm.org/about.html


The same was true when the first transistor was made. It was huge compared to a modern one and not very practical, certainly not for computation purposes. It took years to miniaturize them enough to make them a viable technology for computers.

Why would you expect quantum computers to skip the initial steps?

I don't. However, I expect the same problems to apply to quantum computers as to silicon/transistor ones. Try making either the size of a human brain, running off the same wattage, without overheating, and with at least a 50-year lifespan/service life (doable for silicon ;) but I'm not sure about the other requirements).

Doing one thing really well is easy. Doing everything at the same time? Then we hit the natural limits of physics. Biology already works at the atomic scale; it has a head start on us, and theoretically may already be using the most efficient, and thus only, means to reach its goal. For example, we can make jumbo jets, but making something that flies as well as a bird, on the same power requirements and with the same maintenance (i.e. self-maintaining)? Then we need a bird, not a robot. :P


First we need to define sentience.

From Oxford dictionary:

which is a rather poor definition for the purposes of this discussion.

A microcontroller with a simple light dependent resistor or a pressure plate is "able to perceive or feel things". Would such a device be called sentient?

That's a totally crappy definition. I'm very surprised such crap is found in the Oxford dictionary.

This. We can't even say with 100% certainty whether dolphins (which possess bigger and 'better' brains than we do) are intelligent or not. We need more Science on the subject :)

Intelligence is not a case of "it exists" or "it does not exist"; it is a spectrum. And yes, dolphins possess an amazing degree of intelligence for a mammal, humans excluded. Compared to humans, though, they're like young children with decent motor skills.


Plus even with all the computational power in the world (the internet is not far off a single human brain in complexity/potential parts IIRC), we still don't know where to start on the software to run on it. XD

But you also need to consider the recent advances that have been achieved with deep learning algorithms (software) and neurosynaptic processors (hardware).

Perhaps the problem isn't a lack of processing power so much as the lack of an appropriate architecture? IBM's SyNAPSE chip has 1 million neurons, 256 million synapses, is the size of a postage stamp and runs on 70 mW. And that's only the beginning of what is very likely to be a new paradigm in computing.
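The basic unit such chips implement can be sketched as a leaky integrate-and-fire neuron (a minimal software sketch, not IBM's actual hardware model; the leak and threshold values are arbitrary):

```python
# Leaky integrate-and-fire neuron: the membrane potential decays
# ("leaks") each step, accumulates input current, and emits a spike
# and resets when it crosses the firing threshold.
def lif(currents, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in currents:
        v = v * leak + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif([0.5, 0.5, 0.5, 0.5]))  # constant drive produces periodic spikes
```

Because such units only do work when a spike arrives, a chip built from them can idle most of its "neurons" most of the time, which is one reason the power figures are so low.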

Edited by PakledHostage
Added link to video

Also to consider:

What do you want: a sentient computer, or a sentient program?

The difference is only in semantics, I know, but it is a very thoroughly discussed philosophical question...

Am "I" my thoughts, my feelings, my memories _only_? Or is my "hardware" an integral and inescapable part of my "self"?

Personally? I am of the 'hard AI' opinion that a deterministic program can achieve consciousness regardless of the underlying hardware, given sufficient complexity. Ever since I read 'Gödel, Escher, Bach'.


Doesn't such an AI need programming as well? Because if you need to program every single variable and its output into the computer, AI suddenly becomes very infeasible. I'm pretty sure there is something called distributed processing, in which, instead of putting complicated processes and commands into one program, many smaller programs with simpler commands essentially create their own goals (this is also modelled after nature, specifically animal behaviours, like termite behaviour).
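A toy sketch of that idea, loosely inspired by termite stigmergy (the grid size, agent count, and the single movement rule are all invented for illustration):

```python
import random

# Many simple agents share one local rule: move to the neighbouring
# cell with the most "pheromone", then deposit some. Any trails that
# form emerge from the interactions, not from a central program.
def run(width=10, agents=20, steps=50):
    field = [0] * width
    positions = [random.randrange(width) for _ in range(agents)]
    for _ in range(steps):
        for i, p in enumerate(positions):
            neighbours = [(p - 1) % width, p, (p + 1) % width]
            # prefer the strongest pheromone; break ties randomly
            positions[i] = max(neighbours, key=lambda c: (field[c], random.random()))
            field[positions[i]] += 1
    return field

print(run())  # positive feedback tends to concentrate the deposits
```

No agent has a goal like "build a trail", yet the feedback loop between deposits and movement produces one; that is the appeal of the distributed approach.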


Seen Google's latest image processing system? It's scarily "clever". It is, however, limited.

Software problems are not really much different from engineering problems, which are physics problems. Which is to say, we can engineer things "like" other things, but can we make them exactly the same, so as to say "this is a person" or "this is a [virtual] worm"?

Take Boston Dynamics' PETMAN as an example in engineering. It should be theoretically easy to replicate the mechanical and software systems the human body (or any biological system) uses, right? Well, we can get close; we can do faster or stronger. But can we get all the attributes and qualities? And that's mainly an engineering problem, with power and control mechanics, not a software one with a few billion neurons' worth of information to track. :)

PS: I don't watch videos about "scary computers can learn and hit the singularity", mainly because diminishing returns and hardware limits apply to AI as much as they do to squishy humans. An AI is as dangerous as any other human, car, bomb or animal. It's not anything "special" beyond that (if an AI can self-improve at all, it will not do so faster than the existing intelligences, people and animals, unless we magic in magic hardware for it to run on).

Edited by Technical Ben

Quantum computers, though, risk going back to the room-sized computer for a single byte of calculation power.

Plus, even with all the computational power in the world (the internet is not far off a single human brain in complexity/potential parts, IIRC), we still don't know where to start on the software to run on it. XD

All supercomputers are hall-sized. This is hard to change even if we manage to increase CPU power 100 times.

I think we will need new hardware for a sentient AI, more like quantum computers or neural nets.


PS: I don't watch videos about "scary computers can learn and hit the singularity".

Your loss... Jeremy Howard isn't some crackpot. Among other things, he's a Distinguished Research Scientist at the University of San Francisco, Data Science Faculty Member at Singularity University and Chief Executive Officer and Founder of Enlitic. The title of his TED talk video may be a bit sensationalist, but he almost certainly knows more than anyone here about the state of the art in AI.


PS: I don't watch videos about "scary computers can learn and hit the singularity", mainly because diminishing returns and hardware limits apply to AI as much as they do to squishy humans. An AI is as dangerous as any other human, car, bomb or animal. It's not anything "special" beyond that (if an AI can self-improve at all, it will not do so faster than the existing intelligences, people and animals, unless we magic in magic hardware for it to run on).

Not to hype it up too much, but a self-improving AI is the nuclear bomb of the information age. The scenario is called an intelligence explosion, or hard-takeoff AI. I recommend the book Superintelligence by Nick Bostrom, although that is more about design strategies for a safe AI than about what it takes to make an AI in the first place.

Consciousness is a Hard Problem.


Your loss... Jeremy Howard isn't some crackpot. Among other things, he's a Distinguished Research Scientist at the University of San Francisco, Data Science Faculty Member at Singularity University and Chief Executive Officer and Founder of Enlitic. The title of his TED talk video may be a bit sensationalist, but he almost certainly knows more than anyone here about the state of the art in AI.

Many non-crackpots and extremely well-educated, professional people claim either FTL, perpetual motion or magic, so it's no loss to me. I've seen it first hand in general life: professionalism in no way protects against delusion and risk to others (e.g. the banking crisis, most people's family lives).

The runaway problem with AI is the same as with any system or tool we use. A car or an aircraft has a simple "AI", an autopilot. It can malfunction, or cause a problem, because we asked it to do a specific thing really well and made a mistake about the destination.

So the same applies to AI. A car cannot take over the world, can it? A well-trained cat or dog? An intelligent person? A car has speed and armour; a person, intelligence and craftiness. It's impossible to combine the two without sacrificing efficiency in one area or the other.

Edited by Technical Ben

All supercomputers are hall sized. This is hard to change even if we manage to increase cpu power 100 times.

I think we will need new hardware for an sentinel ai, more like quantum computers or neural nets.

Supercomputer size stays roughly constant; it's the transistors inside that shrink. If we end up having affordable computers with the power of today's supercomputers, then the supercomputers of that day will still be just as big, but much more powerful.

