
"Philosophy will be the key that unlocks artificial intelligence"


Ted


I was reading an article in the Guardian, a newspaper in the UK, discussing how the missing property of your average artificial intelligence is general intelligence. In other words, the AI is unable to think about and view the world the way a person would. This seemed interesting to me, as it isn't something you often think about, but when you consider it, you realise that a lot of fictional AIs did have this subtle property, and it really added to them.

You can read the article here.

What are your thoughts on it?


I didn't read the article, but I do know that's why robots can't be like humans. Also, emotions are important to philosophy, which makes them necessary for being human-like...

It's like the Cybermen in Doctor Who: a Cyberman is a human brain crammed inside a robotic body, and they can't have emotions, because the brain is so delicate that being a Cyberman hurts; a human brain is only human with its emotions still intact. Wipe out the emotional inhibitor and the Cybermen die. So a robot is like a Cyberman from Doctor Who, except that one is (technically) a living organism and the other is a machine.


I'm not sure that we can create a true artificial intelligence capable of thinking; in my opinion, the best we can do is get them to act like they are human. Even if we could make a true intelligence, how could we tell whether it is real or just copying behavior?


What in the world was Deutsch talking about when he said self-awareness is already possible for software? A program can certainly be recursively responsive to its own state, but that is orders of magnitude away from being able to say, "Cogito ergo sum."
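To make that concrete, here's a minimal sketch (mine, purely for illustration; the class and names are invented) of the kind of "self-monitoring" a program can already do:

    # A toy "self-monitoring" loop: the program inspects its own state and
    # reacts to it. This is trivial bookkeeping, not self-awareness.
    class SelfMonitor:
        def __init__(self):
            self.history = []  # a record of the program's own past outputs

        def step(self, value):
            self.history.append(value)
            # "Recursively responsive": behaviour depends on inspecting itself.
            if len(self.history) >= 2 and self.history[-1] == self.history[-2]:
                return "I notice I am repeating myself."
            return "Nothing to report about my own state."

    m = SelfMonitor()
    print(m.step(1))  # Nothing to report about my own state.
    print(m.step(1))  # I notice I am repeating myself.

It "knows" about its own state only in the sense that a thermostat "knows" the temperature, which is exactly the gap I mean.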

"AGIs will indeed be capable of self-awareness" Not necessarily. Some science fiction authors I've been reading lately (Charles Stross?) have made the interesting point that just as aircraft are not structured like birds despite performing similar functions, there's no reason for an effective AI to resemble human thought. (I'm sure actually AI researchers have said the same thing, but I've encountered it in SF.) Self-awareness is one aspect of human intelligence, but not the only one, and an AI wouldn't necessarily have or need it.

On the subject of science fiction, one of the more interesting worlds is Wright's Golden Age trilogy, where the lines between software, machine, and human are blurry and can be crossed easily.


Eh, I don't like that test.

You're far from alone. Turing wasn't trying to create an infallible proof of intelligence, though. He lived at a time when people's notions of what constituted "human intelligence" were being seriously challenged. For a long time, reason was considered the defining characteristic of human thought; after all, even animals had emotions. But then machines were created with the ability to manipulate symbols in a way we might call "reason", and to do it more reliably than the humans that created them!

So what is this thing we call "human thought", anyway?

Turing's suggestion was his test. If a machine could carry on a human-seeming conversation in a convincing way, could it be considered to be thinking like a human? Many people decided the test was good enough. Sure, it has holes, but so does every test for human thought yet devised. (That's arguably the biggest obstacle to AI programming; for many things, we're not exactly sure how our own thoughts work. For example, we recognize ourselves in a mirror as nothing more than a reflection; but exactly how do we do that?)

There's an important concept called "meta-reasoning" that I think will prove the key to unlocking artificial intelligence. We humans can examine our thoughts; determine whether our examinations are valid; process whether those determinations might be reasonable; and so on, seemingly ad infinitum (at least in principle). Computers can only examine their own programming to a level that is determined by some elementary instructions somewhere.
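A crude sketch of that contrast, with a hard-coded constant standing in for those elementary instructions (everything here is invented for illustration):

    # A toy meta-reasoner: it can check a claim, check its own check, and so
    # on, but only to a depth fixed in advance by its programming.
    MAX_DEPTH = 3  # the "elementary instructions somewhere" that bound it

    def evaluate(claim: bool, depth: int = 0) -> str:
        verdict = "valid" if claim else "invalid"
        if depth >= MAX_DEPTH:
            return verdict + " (cannot examine my own reasoning any further)"
        # Meta-step: examine the evaluation that was just produced.
        return verdict + "; on reflection: " + evaluate(claim, depth + 1)

    print(evaluate(True))

However many levels you add, the regress always bottoms out in something the program cannot question, whereas we at least seem able to keep going.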

Ultimately, the question is whether symbol manipulations can ever be arranged in patterns that resemble intelligence. I don't see any good reason why not (though that's hardly a proof that they can).

Of course, there's also no guarantee that intelligence has to resemble human intelligence. Our own brains were constructed from the inside out, and we still bear our evolutionary heritage in the bugs in our thinking (our tendency to follow charismatic leaders without question, our tendency to assume we're better at things than we really are, our tendency to collect information that agrees with our preconceived notions and discard information that disagrees as irrelevant, and so on). Perhaps AIs won't have those bugs, or will have different ones. Perhaps even emotions and considerations we think of as basic are more intimately tied into the conditions that led to the survival of our ancestors than we currently appreciate.


The insightful thing about Turing's test is that it's the only proof we have that other people are thinking like we are, let alone machines. Neither can be directly observed, so his suggestion was to use the same method for both.


Emotions are not important. At all. In fact, I'd strongly argue against them. Emotion gets in the way of rational thought.

What would be more useful is ethics, which are fundamentally derived from logic and observation. I don't think a general intelligence would actually go on some sort of killing spree because of a consequentialist argument (à la I, Robot), because it would also realize the inherent problems with that sort of argument, just as humans do when they apply logic.

The Turing test should not be misconstrued as a proof. It really, really isn't a proof. It depends on the infallibility of the human evaluating the other intelligence, which is just absurd.


Currently we are missing some fundamental basis for AI. I believe that some day, with enough power, we will be able to emulate human consciousness, because it is only a matter of refining our algorithms and models of human beings, which, like the supercomputer simulations of today, are getting better and better. But at what point does it become conscious? I can't say, because you will always know it's just a simulation and can be changed.

I've always entertained the idea that the AI of the future will be based on mirroring the neuron map of the human brain in exacting detail, which will become possible with increasingly sophisticated scanning techniques, much as DNA mapping has become possible on a chip today.

Whether this amounts to emulating a soul is up to the people living at the time, but I do believe it is a hard problem: if you treat them like people, you have an exponentially growing population of virtual citizens who can all be edited. Those decisions are not up to me; they're up to whoever births the first true AI and unleashes it on the world.


Emotions are not important. At all. In fact, I'd strongly argue against them. Emotion gets in the way of rational thought.

What's curious is that modern psychological science has come to disagree with you. Research has shown that cognition and emotion are interwoven systems, with emotion helping logic decide what is desirable and what is virtuous.

(In fact, there's been some work done with people who've had damage to their ventromedial prefrontal cortex. People with damage to this area of the brain often have difficulty connecting emotions to their decisions and plans. Individuals thus afflicted can make long pro-versus-con lists and explore the implications of a particular decision or plan in great detail, but they never reach a final decision on their own, apparently because they lack the emotional weight that serves as a tiebreaker and helps the rest of us opt for one option over another.)


True artificial intelligence, which is indistinguishable from genuine human intelligence, is very possible... and so are artificial emotions (and emotions are incredibly, ridiculously important to a social species like ours... without them we would not have any of the technology or civilization we have today).

To suggest, however, that these artificial intelligences will be operating on devices anything like the computers we have today is just naive. We cannot begin to imagine the sort of computing devices which will exist in 100 years' time... let alone 1000.


The insightful thing about Turing's test is that it's the only proof we have that other people are thinking like we are, let alone machines. Neither can be directly observed, so his suggestion was to use the same method for both.

I've talked to people, especially online, who I'd be convinced are not thinking like a human. On certain issues especially, people will happily parrot the same pre-programmed responses over and over, even after you explain the answer to them, simply because they don't have anything better and this is what they've been told to say. OK, if all you're looking for is a computer that "thinks" like a human, then fine. But if you want to determine whether it's sentient, this clearly isn't a good test (a cow is sentient and conscious, but I seriously doubt it would be mentally capable of carrying on a human-like conversation even if it were physically capable of doing so), and it's not a good test if you want to test for any useful level of cognitive behaviour (because people often act like stupid machines; to some degree we are just machines).

I'm not saying we act like that all the time, of course, but just because you can convince me that I'm talking to a person for 10 minutes doesn't necessarily mean you've done anything particularly special (it's worthy of note, maybe, since I don't think that's been done before, but it doesn't mean you have a good AI). Perhaps if you built a robot that acted precisely human all the time, I might be more likely to be excited.


What's curious is that modern psychological science has come to disagree with you. Research has shown that cognition and emotion are interwoven systems, with emotion helping logic decide what is desirable and what is virtuous.

(In fact, there's been some work done with people who've had damage to their ventromedial prefrontal cortex. People with damage to this area of the brain often have difficulty connecting emotions to their decisions and plans. Individuals thus afflicted can make long pro-versus-con lists and explore the implications of a particular decision or plan in great detail, but they never reach a final decision on their own, apparently because they lack the emotional weight that serves as a tiebreaker and helps the rest of us opt for one option over another.)

As I have said before to people, you simply cannot derive a particular course of action from logic alone. You need some sort of emotional/moral basis to decide what you want to achieve. However, emotions are simply pre-programmed things that have evolved into our brains. They could easily be replaced with any other pre-programmed basic guidelines; emotion is not necessary if we're talking about creating an intelligence.


A purely logical method of thinking could be an infallible way of preventing any meaningful accomplishment whatsoever... If logic alone dictates that making no effort to develop new ideas has a lower "cost" than attempting further optimization, even at minimal risk, then logic will stay put...

A man would not (most might, perhaps... but someone will always have that urge to improve on the current state of things).

Let us consider a creature from a race such as the Vulcans: no emotions, just reason...

How would such a species ever evolve past a minimum-survival era? A logical being would consider death, even of a dear one, a natural and necessary occurrence... Surviving until one is grown enough to produce offspring should suffice; there's no reasoning to justify any further expense of "more useful" energy.

Intelligence is not about making logical decisions... if we even know what intelligence is at all... We can think about the fact that we're thinking, and reflect on how well we're interpreting the way we interpret stimuli... It's extremely hard just to define goals for what could be understood as an "intelligent machine".

A large problem with trying to replicate human intelligence is the undeniable fact that, more often than not, humans are NOT intelligent...

Not just because emotion clouds judgement, but because we frequently fail on the opposite side too... and of course there's the fatal combination of failing to think properly either way...

Then there are the Darwin Awards...

I'm a game programmer by trade... AI is just a day at the office for me, but I keep finding that what feels more human in machine decision-making is not that it acts intelligently, but that it simulates stupidity :)

Once I inadvertently produced a highly believable RTS opponent AI - you never knew what it would do next. It seemed to sometimes send out units to scout ahead, or even to "spy" on you and turn back... or it would just attack violently in some manner that appeared either as a carefully considered plan or, at times, as a poorly anticipated turn of events...

All this fantastically human strategic machination was produced by a single trick:

it was built only for demonstration, so the decisions of where to send platoons were simply - random!
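In pseudo-working form (a reconstruction from memory, not the original code; the map names are made up), the whole "strategy" was roughly this:

    # Sketch of the "believable because random" RTS opponent described above.
    import random

    MAP_POINTS = ["enemy_base", "north_ridge", "river_ford", "home_base"]

    def next_platoon_order():
        # No plan, no state, no evaluation: just pick a destination at random.
        return "send platoon to " + random.choice(MAP_POINTS)

    for turn in range(5):
        print(turn, next_platoon_order())

Players read scouting runs, spying, feints and grand plans into what was pure noise.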

AI is an illusion... and sometimes I wonder if the natural kind isn't one as well.

Cheers!


I don't see any compelling reason why an AI would necessarily need to be like us; it could be entirely different from human beings. To place such constraints on a hypothetical other intelligence is rather short-sighted. There are undoubtedly possible intelligences greater than ours, so far detached from our own that we could not (at least intuitively) understand them. Thus I don't see why any AI necessarily needs to have anything that at all resembles human emotions. The arguments brought forward in favour of intelligence requiring emotion are based on evidence gathered from human minds, not non-human ones.

I can grant you that some level of controlled and interpreted randomness may well be a requirement for another mind to function, but not emotion, at least not in the classical sense. Emotions drive people to incredibly inept and illogical actions, like killing others in fits of passion.

The argument that emotions are somehow required to determine what is virtuous, good or evil also seems to me to be a continuation of Hume's somewhat misguided assertion that one cannot derive an ought from an is. Well, no: we can use logic and science to aid in determining what is moral, and better yet, it can possibly be done in a way that is universal to all minds, or rather, to all complex systems capable of self-analysis and analysis of the universe.


Currently we are missing some fundamental basis for AI. I believe that some day, with enough power, we will be able to emulate human consciousness, because it is only a matter of refining our algorithms and models of human beings, which, like the supercomputer simulations of today, are getting better and better. But at what point does it become conscious? I can't say, because you will always know it's just a simulation and can be changed.

And you think that humans could not be 'changed' given sufficiently advanced technology?

I've always entertained the idea that the AI of the future will be based on mirroring the neuron map of the human brain in exacting detail, which will become possible with increasingly sophisticated scanning techniques, much as DNA mapping has become possible on a chip today.

Whether this amounts to emulating a soul is up to the people living at the time, but I do believe it is a hard problem: if you treat them like people, you have an exponentially growing population of virtual citizens who can all be edited. Those decisions are not up to me; they're up to whoever births the first true AI and unleashes it on the world.

First of all, are you sure that WE have 'souls'? What exactly is a 'soul'? What makes you sure that a 'soul' is not just an emergent property of a sufficiently complex, adequately organised neural network, natural or artificial?

As I said previously, given advances in neural imaging, in our understanding of how our own brains function (which will be necessary if we want to create a true AI, IMO; how can you recreate something you don't really understand yet?) and in other technology, it will probably be no more difficult to 'edit' a person's brain and change whatever you want. The only things preventing people from doing that will be laws and ethics, and the same should apply to virtual citizens...


The problem, I think, is that we're trying to run before we can crawl with AI. Human intelligence doesn't spring forth fully formed in one jump, and yet that is precisely what we try to make AIs do. The human mind grows gradually, starting in the womb from something less developed than even simple AIs.

In addition, our intelligence is adapted to operating a human body, interacting with a human world and interacting with other humans. An AI grown organically would obviously develop a very different intelligence for the world it finds itself in.

Perhaps this is how we should let our AIs develop: letting them slowly but surely piece themselves together, allowing even fatal mistakes to add up over a decade or two, in the presence of other developing AIs. We might not recognise the intelligence we create, but we have to be understanding parents to our technological children and let them develop and grow outside our narrow expectations.


As I have said before to people, you simply cannot derive a particular course of action from logic alone. You need some sort of emotional/moral basis to decide what you want to achieve. However, emotions are simply pre-programmed things that have evolved into our brains. They could easily be replaced with any other pre-programmed basic guidelines; emotion is not necessary if we're talking about creating an intelligence.

Happiness and sadness in and of themselves may be givens of the human condition, but when and how they are felt is hardly a simple pre-programmed thing with a simple, consistent, easily identifiable set of triggers. Small changes can greatly affect how people feel and think. What pre-programmed guidelines, exactly, do you propose to implement in this hypothetical artificial being, to the extent that it will be possible to create intelligence? Intelligence we can recognize as such?


I'm a robotics student with a strong interest in philosophy so I've thought about AI quite a bit over the last few years. This is a long one but hopefully someone reads it :)

Firstly, the 'intelligence' part of AI is a very controversial word, as intelligence is incredibly easy to create artificially: my phone is perfectly capable of intelligently interpreting my actions and acting on them, and in that way we have a form of AI in just about all computing devices. This is where the concept of 'strong' and 'weak' AI comes from. 'Strong' AI is basically actual artificial consciousness, while 'weak' AI is the kind of stuff that Google's search uses, or that your credit card company uses to detect spending patterns suggesting your details have been stolen.

Weak AI is certainly possible, is widely used today, and will lead to huge advantages for humanity. There's not a whole lot that 'strong' AI can do that 'weak' AI can't do just as well or better, which is why there's so much more funding and research going into weak AI.

But to be honest, strong AI is the interesting one. In this case you're aiming to actually create an entity you could consider conscious or alive. So my thinking is that, to judge whether this is even possible, we need to look at the absolute basics of consciousness.

So I think we can all agree that we're not the only conscious animal on Earth; dolphins, great apes, etc. are easily recognisable and testable as conscious. They're not as complex or as advanced as us, however, so where's the cutoff below which an animal isn't 'intelligent' enough to be considered conscious? Well, my opinion is that this is a spectrum: there are many aspects to consciousness, but mostly we can fit everything along a scale from not conscious to more conscious (as a side note, this would mean that we humans are not as 'conscious' as is possible!).

So what's the key thing that decides your place on the 'consciousness' scale? Well, the number of neurons you've got in your brain seems to be roughly related (the brain structure and configuration are pretty important too). So what is it about more neurons that makes things more intelligent? Well, a neuron is pretty simple and can't do much alone, but as a collective they can do a lot more. So you've got a large number of simple little machines working together, and that creates the consciousness phenomenon. This suggests that the key to our consciousness (and other animals') is the sheer complexity of our neural network: the combination of many simple components reacting with each other gives rise to a collective intelligence.
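As a toy version of "simple components giving rise to something none of them can do alone", here are three trivially simple threshold units wired to compute XOR (a standard textbook construction; nothing brain-accurate about it):

    # Each "neuron" is trivial: a weighted sum compared against a threshold.
    def neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # No single threshold unit can compute XOR, but three together can.
    def xor(a, b):
        h_or  = neuron([a, b], [1, 1], 1)         # fires if a OR b
        h_and = neuron([a, b], [1, 1], 2)         # fires if a AND b
        return neuron([h_or, h_and], [1, -1], 1)  # OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))

Scale the same principle up by ten orders of magnitude and you get the kind of collective complexity I'm talking about.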

Now, we're not perfect conscious beings, but we're relatively very intelligent (and conscious) compared to a plant. So the key to strong AI will be to build something that can fit along this consciousness scale. Now here's my main argument for why it's impossible to have true strong AI on a regular computer:

If you took a PC like we have today (running on binary computations), made it infinitely fast and small, stuck it in a person's body and hooked up all the control systems, so that it's basically a robot with a biological body, then is there any software you could run on it that would qualify it as conscious? The fact that all of the actions/thoughts of this robot would be defined in code means that any consciousness it shows would actually be a simulation of consciousness. Maybe that counts; I can't say for sure, but my gut feeling is that it doesn't, because it's not actually a complicated system. There's no complexity in a binary computer: the instructions it performs on the code are simple, it just does them quickly. So rather than lots of simple components working together to create a complex system, we have one simple component running extremely quickly to create the appearance of a complex system.

Now here's the tricky bit: say you swapped the computer in the robot for a collection of billions of little computers, all wired up together like neurons. Does that count? My thought would be that it's further along the spectrum than the original computer, but not quite as far as a regular biological brain. The reason is that each of our neurons is itself a collection of a huge number of much simpler mechanisms (chemical reactions).

So in conclusion: no, I don't think philosophy has a damn thing to do with creating strong AI; there's no code to be written (or at least not much). Really it's just about creating an artificial recreation of the brain using simple mechanisms in high quantity, then working out how to create the 'spark' that starts the whole thing moving so that, from the self-perpetuating chain reaction between the components, a consciousness arises.

The good news? Others agree with me, and there are several projects underway to do something just like this. It's early days and there's not a lot of money in it, so don't expect quick results, but they'll keep slowly moving along that consciousness spectrum until one day in the future they have something nobody can deny is alive.


I was reading an article in the Guardian, a newspaper in the UK, discussing how the missing property of your average artificial intelligence is general intelligence. In other words, the AI is unable to think about and view the world the way a person would. This seemed interesting to me, as it isn't something you often think about, but when you consider it, you realise that a lot of fictional AIs did have this subtle property, and it really added to them.

You can read the article here.

What are your thoughts on it?

I agree with this: once you have something that's questioning its own existence, you've got intelligence. And I firmly believe that the first man-made thing to gain this intelligence will be the Internet, just because it's connected to everything, and everything about everything is on it.

But then again, it could happen through an 'evolution'-like process where, as time goes on, it starts becoming self-aware (cough, SKYNET).


If you took a PC like we have today (running on binary computations), made it infinitely fast and small, stuck it in a person's body and hooked up all the control systems, so that it's basically a robot with a biological body, then is there any software you could run on it that would qualify it as conscious? The fact that all of the actions/thoughts of this robot would be defined in code means that any consciousness it shows would actually be a simulation of consciousness.

What if you have an infinitely fast and small binary computer that is running a simulation of all the neurons in a person's brain? If the simulation is accurate enough, shouldn't it allow consciousness?
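For a sense of what "simulate the neurons" means at the smallest possible scale, here's a single "leaky integrate-and-fire" neuron, a standard simplified model (the constants here are arbitrary; a whole-brain version would need tens of billions of these, plus synapses, chemistry and plasticity):

    # One leaky integrate-and-fire neuron stepped through time.
    V_REST, V_THRESH, LEAK, DT = 0.0, 1.0, 0.1, 1.0

    def simulate(input_current, steps=30):
        v, spike_times = V_REST, []
        for t in range(steps):
            v += DT * (input_current - LEAK * v)  # integrate input, leak charge
            if v >= V_THRESH:                     # threshold crossed: spike
                spike_times.append(t)
                v = V_REST                        # reset after firing
        return spike_times

    print(simulate(0.15))  # time steps at which this one neuron fires

If consciousness really is just the collective behaviour of enough of these, then a fast enough binary machine running them should qualify.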


That might be one of the biggest questions known to man... together with "are we alone?"

But then, scientists everywhere increasingly agree that most likely we are not alone (though why any other intelligent race would ever want to play with us morons is another matter).

Define "consciousness" - is it just a matter of knowing one's ability to reflect on and critique one's own thought? Or is there more to it?

How would we ever know if it's safe to unplug an "intelligent machine" without "killing" it?

There are books and books, and bad movies^2, about this...


A purely logical method of thinking could be an infallible way of preventing any meaningful accomplishment whatsoever... If logic alone dictates that making no effort to develop new ideas has a lower "cost" than attempting further optimization, even at minimal risk, then logic will stay put...

No, it doesn't. Go ahead and purely logically justify lowering the "cost", minimizing risk, etc.

