
"Philosophy will be the key that unlocks artificial intelligence"


Ted


What if you have an infinitely fast and small binary computer that is running a simulation of all neurons in a person's brain? If the simulation is accurate enough, shouldn't it allow consciousness?

That's a rather philosophical question. :P My thoughts on the matter would be "Yes", despite the previous poster's disinclination to respect/accept philosophical input on the matter (even if philosophy isn't the key to making AGI, it certainly can inform its creation and growth).

I've oft used that particular example myself as a counter-example to the suggestion by many animal rights activists that we should, at some point in the future, be able to use computer simulations so accurate that they would then preclude the need for live animal testing. However, there is little to no practical difference, that I can see, from an ethics standpoint, between a live animal and a perfect or near-perfect simulation of one. Perhaps the form of their existence is fundamentally different, but if they can both experience suffering, they both should have equal moral standing.

An odd and perhaps unintuitive view.


Happiness and sadness in and of themselves may be givens of the human condition, but when and how they are felt is hardly a simple pre-programmed thing with a simple, consistently identifiable set of triggers. Small changes can greatly affect people and how they feel and think. What preprogrammed guidelines, exactly, do you propose to implement in this hypothetical artificial being to the extent that it will be possible to create intelligence? Intelligence we can recognize as such?

Generally speaking, there are things that make people happy and things that make people sad. Torture relies upon it. Sure, there are nuances, but overall, people are generally very alike.

The problem here is, I really don't know how to define intelligence. I mean, we can't even get a consistent single definition that applies across humans. There are different "types" of intelligence that we recognise. All I suppose an AI would need to be capable of doing to qualify is being able to make decisions that are not simply a result of its programming. That it's able to learn and build on its experiences. Having cognisance, I guess. I don't realistically think there's any way we'll ever be able to test for true consciousness or anything like that.

Its ability to make decisions could be based on anything the programmer wanted it to be, be it the objective of killing people or the objective of standing on its head. We all have base morals upon which we tend to base our reasoning; that's how we decide which course of action would be best in a given scenario when we actually bother to think about it. For most people they are loosely based around maximizing people's happiness (although the specifics vary a lot from person to person, and some people's are completely different), and these have no logical justification. They're just right because they're right (because that's what our brain chemistry tells us). You could give a computer that sort of base programming, or you could program it to think that the ultimate "moral" goal is to kill people. Either way, emotions are not necessary for the decision-making process. I mean, it certainly may not act precisely like a human, but the same could be true for far-distant aliens that we may well regard as intelligent.

Edit: For example, psychopaths have a problem regarding emotions in that they don't really feel them properly. I wouldn't class them as sub-human or unintelligent.

Edited by Person012345

Indeed, persons with ASPD can be incredibly intelligent. They simply lack any ability to experience empathy. That requires that they have different incentives to behave in accordance with the social contract, or rather not run around killing people because they feel it's a nice pastime.

In fact, I've known persons with ASPD who were quite capable of acting morally, in ways that others who are "normal" would arrive at through empathy. It just requires a logical, intellectualised argument for them to reach moral values. It's still not a good idea to run about bashing your competitors' heads in with rocks, because it doesn't facilitate cooperation, and a cooperative society is oft beneficial to oneself, so might as well not rock the boat, as it were. (I'm waaaay oversimplifying here.)

Edited by phoenix_ca

Well, ASPD and psychopathy are not the same thing, although they are similar. It's not just empathy they don't feel. Often the reason that many DO run around killing people, and why they are generally very effective at being charming and good at bull****ting, is that they have no fear of being caught (or much reduced fear). They often simply have trouble feeling emotions generally (although emotions are still present to some degree; I'm not a psychologist, though, so I can't be specific).

Edited by Person012345

Point taken, though it was my understanding that "psychopath" was an outdated term recently supplanted by the classification of ASPD. At least that's what I recall from my psychology classes. I need to go review the DSM on this again.

Edit: Ah yes, turns out the two do have differing diagnostic criteria. Yay. Learning and stuff.

Edited by phoenix_ca

What if you have an infinitely fast and small binary computer that is running a simulation of all neurons in a person's brain? If the simulation is accurate enough, shouldn't it allow consciousness?

It depends on the computer: a Turing-machine-style single-core computer would definitely be running a simulation, as it only appears conscious if it's fast. I believe that my mind would still be recognisably conscious if you slowed down the speed of everything in it. If you took a Turing computer simulating a brain perfectly and watched it running slowly, then it would obviously be a mechanical simulation. I have the opinion that consciousness is a phenomenon born of an ongoing, parallel chain reaction of many simple systems.

The key thing here is whether you believe it is possible for a simulation of something to actually become that thing if it is fast/accurate enough; I don't. However, as I mentioned, this becomes blurry when you consider multiple computers working in parallel. Again, I'd say it's all a spectrum, and increasing complexity leads to higher levels of consciousness.

Unfortunately I am still undecided about a lot of the trickier questions. For example, does a being with the same number of neurons (and corresponding brain network layout) as a billion people have the same consciousness potential as a superorganism made from a billion people? A surprisingly small proportion of the brain's 100 billion neurons are actually relevant for consciousness. I would think that the number of senses you have would not determine your consciousness level, but then what if you had no senses at all? Does your past experience alter your level of consciousness? Is a highly educated person more conscious than someone raised by wolves?

Really, we have no idea what parts of the brain lead to consciousness or what it really is; without this information we will never be able to work out what the structure of an artificial brain would need to be, and thus have no chance of achieving strong AI. Interestingly, the only way we will ever know this stuff is through a heavy dose of advanced weak AI applied to much more advanced brain imaging hardware than we have now. So in fact, it is highly likely that humans won't actually be the ones to design a strong AI device; it's going to be done by a supercomputer.

That's a rather philosophical question. :P My thoughts on the matter would be "Yes", despite the previous poster's disinclination to respect/accept philosophical input on the matter (even if philosophy isn't the key to making AGI, it certainly can inform its creation and growth).

I've oft used that particular example myself as a counter-example to the suggestion by many animal rights activists that we should, at some point in the future, be able to use computer simulations so accurate that they would then preclude the need for live animal testing. However, there is little to no practical difference, that I can see, from an ethics standpoint, between a live animal and a perfect or near-perfect simulation of one. Perhaps the form of their existence is fundamentally different, but if they can both experience suffering, they both should have equal moral standing.

An odd and perhaps unintuitive view.

Oh, I respect and accept philosophical input; it's what I'm doing right now! What I meant was that philosophers are not required to create a strong AI. All we need is science, and philosophy has no place there.


Philosophy and science are joined at the hip. To say that philosophy has no place in science is rather mistaken. And in fact, neuroscience and philosophy inform each other greatly now. Just look at the work of Sam Harris. O.o Like him, I just don't see why there needs to be any distinction between science and philosophy. They are both the pursuit of truth through reason and logic. Indeed, science has philosophical underpinnings.

I'll grant you that philosophers might not be necessary to create a genuine strong AI, but the ethics of creating one, of living and interacting with one, and the ramifications of reprogramming one when it "malfunctions" are all squarely in the realm of ethics and morality.

What I find curious is your sureness that consciousness must necessarily be born of the complex interaction of physical systems. Software programs may not be separate physical entities, but they are, at the most basic level, complex systems interacting with each other, at least in object-oriented programming. A function takes input, "sense data", and interprets it. Another function uses this interpretation, outputs the data to another function, and so on. Feed that data back upon itself, and you have the basic function of a conscious mind.

Naturally, such basic functions lack characteristics like polymorphism, and the ability to spontaneously change how they interact with other functions, but the basic structure of a computer program appears rather similar to the basic structure and function of a neuron, at least in an abstract sense.
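To put that in rough code terms, here's a minimal sketch (in Python, with entirely made-up function names and numbers, not any claim about how a real mind works) of the sense-interpret-act loop I'm describing, with the output fed back in as part of the next input:

```python
# A toy sense -> interpret -> act loop whose output is fed back into the
# next round of input. Function names and values are purely illustrative.

def sense(raw, feedback):
    """Combine new 'sense data' with what the system produced last time."""
    return {"raw": raw, "feedback": feedback}

def interpret(observation):
    """Turn the observation into some internal representation."""
    return sum(observation["raw"]) + 0.5 * observation["feedback"]

def act(interpretation):
    """Produce an output, which will also be fed back on the next step."""
    return interpretation * 0.1

feedback = 0.0
for step in range(5):
    observation = sense(raw=[1, 2, 3], feedback=feedback)
    interpretation = interpret(observation)
    feedback = act(interpretation)
    print(step, round(feedback, 4))
```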


One way that those who invent AIs go wrong is that they try to make them too smart and to code them carefully; every part is designed and made exactly. This is only half of how they should go about the operation. I have devised an idea by which AIs can be made to think and feel as humans do.

Firstly, three microchip designs must be created:

-a sensory microchip that can pick up an initial signal and pass it on to a CNS

-a relay microchip that exists in the CNS, can pick up signals from sensory microchips and 'work' with its connecting microchips to process the signal

-a motor microchip that can take the processed signals and pass them on to a response function

These would then have to be organised and arranged, in their masses, like a human CNS is. A recreation of the human ear and eye could be attached to the sensory microchips, and the motor microchips could be attached to a recreation of human vocal cords and a mouth. The AI would remain silent for a while, observing human conversations and learning our language. It would eventually become curious and create a noise with its mouth. After about a year, it should have started forming basic words and sentences. After six to seven years, it may even be able to converse in full English.
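Here's a very rough sketch of that sensory → relay → motor arrangement, assuming each "microchip" is just a simple unit that sums its inputs and fires past a threshold (the wiring and numbers are invented purely for illustration, nothing like real hardware):

```python
# Toy sensory -> relay -> motor pipeline, loosely mimicking the CNS layout
# described above. Thresholds and weights are arbitrary illustrative values.

def unit(inputs, threshold=1.0):
    """A 'microchip': fire (1) if the summed input crosses the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def sensory_layer(stimulus):
    # Each sensory chip picks up part of the raw signal.
    return [unit([s]) for s in stimulus]

def relay_layer(sensory_out):
    # Relay chips 'work together', each seeing all the sensory outputs.
    return [unit(sensory_out, threshold=2.0) for _ in range(3)]

def motor_layer(relay_out):
    # A motor chip turns the processed signal into a response.
    return unit(relay_out, threshold=2.0)

stimulus = [0.4, 1.2, 1.5, 0.1]   # e.g. signals from a recreated ear/eye
response = motor_layer(relay_layer(sensory_layer(stimulus)))
print("respond" if response else "stay silent")
```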

If one were to trust the AI, one may wish to provide it with arms and legs. The muscles would be made of a colloid that contracts when it receives a negative charge (from the motor microchips). One may also wish to provide it with human hands and legs, so that the AI can walk and write, just as humans do. With some basic technology, we could 'make' humans.

However, what if the AI becomes corrupt? It could learn to hack into databases and download precious data. It could learn how to make loyal replicas of itself, as we had initially planned to do. It could wipe us out.

So consider this: is the science worth the risk to humankind?

(Basically, if you're a "tl;dr" kind of person: make an electronic human CNS.)

You have been warned...

Edited by Whptheenah

Philosophy and science are joined at the hip. To say that philosophy has no place in science is rather mistaken. And in fact, neuroscience and philosophy inform each other greatly now. Just look at the work of Sam Harris. O.o Like him, I just don't see why there needs to be any distinction between science and philosophy. They are both the pursuit of truth through reason and logic. Indeed, science has philosophical underpinnings.

Sure, philosophy creates some useful things, but really it is only able to offer suggestions to science. I love philosophy, but I really hate that there's so little scientific method to it; valuable ideas are created, but there's no mathematical-style 'proof' for any of it (outside of basic logic and things that are closer to math).

I'll grant you that philosophers might not be necessary to create a genuine strong AI, but the ethics of creating one, of living and interacting with one, and the ramifications of reprogramming one when it "malfunctions" are all squarely in the realm of ethics and morality.

100% with you here, but at the same time, with no proof that your ideas are correct, it's likely the scientists who created the AI will disagree.

What I find curious is your sureness that consciousness must necessarily be born of the complex interaction of physical systems. Software programs may not be separate physical entities, but they are, at the most basic level, complex systems interacting with each other, at least in object-oriented programming. A function takes input, "sense data", and interprets it. Another function uses this interpretation, outputs the data to another function, and so on. Feed that data back upon itself, and you have the basic function of a conscious mind.

Software can be complex, but in the end it comes down to a long line of predetermined, simple instructions. So it can simulate complexity, but when you break it down and look at it, there's nothing there.

Naturally, such basic functions lack characteristics like polymorphism, and the ability to spontaneously change how they interact with other functions, but the basic structure of a computer program appears rather similar to the basic structure and function of a neuron, at least in an abstract sense.

I believe it's possible to simulate down to the finest detail if you've got the time to write the code, but really the code is just a model of the actual processes. I can't say that it's impossible to perfectly model reality, because we don't yet know enough about the smallest units in the universe; it may be infinite (thus making a perfect model impossible).

Any single-threaded digital computer can be visualised as a Turing machine. This is just about as simple a system as you can get, and while my idea is not developed enough for me to prove it, I think that this cannot be considered capable of consciousness when considered alone.
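To give a feel for just how minimal such a machine is, here's a toy sketch of one in Python (the rule table below simply flips every bit on the tape and halts; it's purely illustrative, not a model of anything in particular):

```python
# A minimal Turing machine: one head, one tape, one rule table.
from collections import defaultdict

rules = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape_str):
    # Cells outside the initial input default to blank ("_").
    tape = defaultdict(lambda: "_", enumerate(tape_str))
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(len(tape_str)))

print(run("100110"))   # prints 011001
```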


Okay, I think we have a different conception of philosophy. When I speak of philosophy, I do mean philosophy in the sense of scientific proof and truth-based pursuits. You're right if you're referring to previous conceptions of philosophy where it is separate from science, as if values and science should somehow be totally separated. But frankly, they needn't be. A lot of what scientists do today is philosophical thought on the implications of their own research.

As for proofs, I'd hardly call the field of logic "basic". Its underpinnings are based on the same essential rules of math, but it is a VERY powerful tool when applied correctly. The validity of a complex argument can be boiled down to its constituent parts, and with that we can determine its validity, whether it is tautological, contradictory, or contingent, and even the sometimes quite helpful knowledge of whether or not the truth-value of an argument or complex sentence is contingent on a single atomic sentence or truth value. (Sometimes it's the case that, in a very complex argument, the truth value of the entire thing is always equal to the truth value of a single part of it.) It's kinda like Occam's Razor on steroids, sometimes.
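That boiling-down can even be mechanised. A quick sketch of what I mean, classifying a propositional sentence as tautological, contradictory, or contingent by brute-force truth table (the example sentences here are arbitrary illustrations):

```python
# Classify a propositional formula by enumerating every truth assignment.
from itertools import product

def classify(formula, variables):
    values = [formula(**dict(zip(variables, assignment)))
              for assignment in product([False, True], repeat=len(variables))]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingent"

# (P -> Q) or P is true under every assignment, i.e. a tautology.
print(classify(lambda P, Q: (not P or Q) or P, ["P", "Q"]))
# P and not P is false under every assignment, i.e. a contradiction.
print(classify(lambda P: P and not P, ["P"]))
```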

At the most basic level of knowledge, the only thing anyone can know is that they exist. Cogito ergo sum, as it were. Nothing else is "proven". Philosophers have struggled with this fundamental issue for centuries. All other data about the world is subject to this fundamental uncertainty: I may very well be a brain in a tank, this entire world concocted for the benefit of some researcher, or for the entertainment of some being, who knows. I have no way of confirming or denying this. The only fundamental truth I can confirm is that I exist.

Science itself is based on fundamental assumptions/assertions. We assert that it is important to value logic, to value evidence, to be intellectually honest. The problem then becomes, what logical argument would you provide to convince someone to value logic? What evidence to convince them to value evidence? These are entirely unproven valuations, and yet, science is no less scientific for it. (Cripes, I'm practically quoting Sam Harris right now.) My point is, modern philosophy is often in itself a scientific pursuit; it just happens to be one of somewhat abstract ideas, but that doesn't mean that those ideas cannot be held to the same rigor as other fields of science.

*ahem* End slight tangential rant. :P (I'm a philosophy student, but also a scientist, if that explains anything.)

I think I understand your argument that a sequential form of processes would be merely emulative of the apparently more complex model, but then, I have trouble squaring that with the reality of neurons. They are, essentially, those smaller processes that in the previous example were being executed sequentially, now being executed in parallel.

In some sense, the impulses generated by this perfect simulation could be considered analogous to neurons firing. There is merely a time delay between their interaction.

Let me ask you this: What if your brain only fired one neuron at a time, but those neurons still interacted with each other as per normal? Are you then not a conscious being, based on the premise that to be such a being your processes must be running simultaneously?

Edited by phoenix_ca

In the end, I guess we're all just going to get so curious that we'll try my idea and kill everyone. Make small, simple neurones that each process simple bits of data and link them together in an interactive web that forms a recreation of the human CNS. At our cores, we are nothing but millions of tiny, basic processors that each process the most basic bits of data. If we link these basic processors together, however, we can create a supercomputer that looks and functions like our CNS, and therefore should be as complex and advanced as our CNS. Many of you were probably thinking the same thing too (I like this new sub-forum).


Okay, I think we have a different conception of philosophy. When I speak of philosophy, I do mean philosophy in the sense of scientific proof and truth-based pursuits. You're right if you're referring to previous conceptions of philosophy where it is separate from science, as if values and science should somehow be totally separated. But frankly, they needn't be. A lot of what scientists do today is philosophical thought on the implications of their own research.

Philosophy uses a lot of scientific ideas, but at its core it comes down to 'what is right and what is wrong'; science is more about 'why are we here and what are we?'. Science is making great progress on its question, but philosophy has made very little.

As for proofs, I'd hardly call the field of logic "basic". Its underpinnings are based on the same essential rules of math, but it is a VERY powerful tool when applied correctly. The validity of a complex argument can be boiled down to its constituent parts, and with that we can determine its validity, whether it is tautological, contradictory, or contingent, and even the sometimes quite helpful knowledge of whether or not the truth-value of an argument or complex sentence is contingent on a single atomic sentence or truth value. (Sometimes it's the case that, in a very complex argument, the truth value of the entire thing is always equal to the truth value of a single part of it.) It's kinda like Occam's Razor on steroids, sometimes.

I'm not saying logic is basic, just that typically only basic logic is used in philosophy; philosophy and mathematics are closely tied but logic is really just maths.

At the most basic level of knowledge, the only thing anyone can know is that they exist. Cogito ergo sum, as it were. Nothing else is "proven". Philosophers have struggled with this fundamental issue for centuries. All other data about the world is subject to this fundamental uncertainty: I may very well be a brain in a tank, this entire world concocted for the benefit of some researcher, or for the entertainment of some being, who knows. I have no way of confirming or denying this. The only fundamental truth I can confirm is that I exist.

Mathematics is the only thing that is proven so far; everything else is down to probabilities of truth.

Science itself is based on fundamental assumptions/assertions. We assert that it is important to value logic, to value evidence, to be intellectually honest. The problem then becomes, what logical argument would you provide to convince someone to value logic? What evidence to convince them to value evidence? These are entirely unproven valuations, and yet, science is no less scientific for it. (Cripes, I'm practically quoting Sam Harris right now.) My point is, modern philosophy is often in itself a scientific pursuit; it just happens to be one of somewhat abstract ideas, but that doesn't mean that those ideas cannot be held to the same rigor as other fields of science.

The answers to those questions are down to philosophy, and there is no excuse for having made no progress at all towards consensus in our 100,000-year existence. I'm not saying there aren't potential answers out there, just that philosophers generally view the idea of having to prove any of their ideas to anyone as a waste of time. I believe it's reasonable to expect proof, using irrefutable logic, but obviously I'm in the minority. Personally, I find it hilarious that people can devote their entire lives to something they honestly believe is completely futile.


This philosophy discussion is getting a bit off topic, but I still want to continue it; I can split this off into a new topic if required.

I think I understand your argument that a sequential form of processes would be merely emulative of the apparently more complex model, but then, I have trouble squaring that with the reality of neurons. They are, essentially, those smaller processes that in the previous example were being executed sequentially, now being executed in parallel.
In some sense, the impulses generated by this perfect simulation could be considered analogous to neurons firing. There is merely a time delay between their interaction.

Yes, they're parallel and independent. At this point I can't really go much further; I really have no idea what consciousness is, or how valuable it is, or if it serves any purpose. In the end, my ideas come down to believing that my consciousness is due to the fire-like chain reaction of my interacting brain cells, and that if you could somehow shut down everything in my head and then restart it in exactly the same state, so that it continued as if nothing had happened, I would have died, and when it restarted another identical consciousness would have been created; just like how 'teleporting' by recreation at the other end would kill you, or how creating an identical clone of yourself does not mean that you would survive if your original body was killed. This is relevant because that 'freezing' is essentially what happens every time that single-threaded computer finishes an instruction. Taken alone, a single instruction like that is certainly not a conscious entity, even for a brief moment.

Let me ask you this: What if your brain only fired one neuron at a time, but those neurons still interacted with each other as per normal? Are you then not a conscious being, based on the premise that to be such a being your processes must be running simultaneously?

I think the response above answers this but let me know if there's something I missed.

Edited by Foamy

Methinks I see what you're getting at now regarding consciousness, though that may or may not be a semantic debate (though it certainly is a philosophical one).

As for philosophy, yes, I agree the rate of progress is rather appalling, though the field was hobbled in much the same way science was over the millennia, and it is still disadvantaged from making much real progress by the false division of it from science. A good philosophical mind is also a scientific one. I don't have time right now to expand on this, but I'll leave you with a lecture of Sam Harris's (and perhaps humbly suggest you pick up his book The Moral Landscape as well, though the lecture is a good summation of the general ideas).

Even if we could make a true intelligence, how could we tell if it is real or if it is just copying behavior?

Why would we need to?

Some science fiction authors I've been reading lately (Charles Stross?) have made the interesting point that just as aircraft are not structured like birds despite performing similar functions...

Not the best metaphor given that a lot of the reason aircraft differ from birds is the same as the reason birds differ from flying insects - different effects prevail at different scales.

In addition, our intelligence is adapted to operating a human body, interacting with a human world and interacting with other humans. An AI, grown organically, would obviously develop a very different intelligence for the world it finds itself in.

And of the millions of species on the planet that are born and develop, how many exhibit human-level intelligence?

The 'let it grow' approach seems very unlikely to work as a method for finding out how to make an AI. It might be the best way to create variability in AIs once we know how to make them, but as a research method?

There's no complexity in a binary computer; the instructions it performs on the code are simple, it just does them quickly. So rather than lots of simple components working together to create a complex system, we have one simple component running extremely quickly to create the appearance of a complex system.

CPUs contain approximately a billion simple transistors working together at the same time, each of which is made of simpler chemical/electrical interactions. Sounds pretty similar to your description to me.

And for the theory that an infinite computer cannot model consciousness to be plausible, I'd like to see at least one other completely separate problem it can't model. If it's only AI researchers making that claim, then it does seem suspicious.

Personally, I hold the view that AI is relatively straightforward to do (similar to any other big project) and can be done in a variety of ways on lots of different types of hardware, and that the main problem is that we haven't the faintest clue where to start looking. I suspect we'll see the answers emerging from neuroscience rather than from philosophers or AI researchers.

The theory that X is special has so often not held up in the past; if in doubt, assume we simply don't know how to look at it.


Well, yes, emotion gets in the way of rational thought, and yes, there is the question of whether an AI would actually be self-aware, at least as a person is supposed to be. But, well, honestly? I think there is an easy solution.

A being that is self-aware should dream.

An AI that is shut into, say, sleep mode should assimilate information that it has gathered when it is 'awake', so that it can access the information faster, and understand what the information is better, just like us meat-puppets. The brain works (as I understand it) as a super-computer that is constantly self-checking and self-evaluating possible outcomes and situations that could happen, and that trait is called our imagination.

Therefore, an AI should be able to dream, and imagine (in the act of imagining, create).
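To make that 'sleep mode' idea a little more concrete, here's a very rough sketch (every name and detail is made up, and the "consolidation" is just simple counting) of an agent that logs raw experiences while 'awake' and condenses them into a faster-to-access summary while 'asleep':

```python
# A toy wake/sleep loop: raw experiences are logged while "awake" and then
# folded into a quick-lookup summary while "asleep". Purely illustrative.
from collections import Counter

class DreamingAgent:
    def __init__(self):
        self.raw_log = []              # everything noticed while awake
        self.consolidated = Counter()  # summary built up during sleep

    def awake(self, observations):
        self.raw_log.extend(observations)

    def sleep(self):
        # "Dreaming": replay the day's raw log and fold it into the summary.
        self.consolidated.update(self.raw_log)
        self.raw_log.clear()

    def recall(self, item):
        # After sleep, recall is a cheap lookup rather than a scan of raw data.
        return self.consolidated[item]

agent = DreamingAgent()
agent.awake(["door", "voice", "voice", "light"])
agent.sleep()
print(agent.recall("voice"))   # -> 2
```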

If there is a computer that can do that, then it should be relatively simple to create an AI.

I hope we figure out a way to do it, because, frankly, computers are getting almost complicated enough for us to try to create a self-aware system. It will cause some religious issues, but I think it would be worth it.


Again, AIs can be self-aware. All self-aware beings are merely masses of simple processors linked in a huge CNS grid. If we humans can replicate this, we can create not only a self-aware AI with superior intellect, but also one with a conscience and emotions. These last two things are the most important things to give the AI. If the AI has a superior intellect to humans and is self-aware, but it lacks feelings of guilt and sympathy, it would most likely be the end of the world as we know it.

And for the comment above, I highly doubt that it would create religious issues as we would not be harming LIFE for the tests. If the AI feels pain, we can disassemble it without actually killing anything recognized as life by any religion.

Edited by Whptheenah

A computer does what it does by doing one thing at a time, in a pre-determined sequence, but incredibly quickly, in digital.

A brain does what it does by doing lots of separate and seemingly totally unconnected things all at the same time, in analogue (although slower, individually).

Both systems rely on feedback from the result of these processes to influence or generate the next set of processes, although in a brain, due to the much larger area of influence, the results can prove somewhat unpredictable. (Perception of choice and all that, but I don't want to get into one of those discussions.)

So yeah, maybe a network of a few billion tiny analogue computers would do the trick for a true machine intelligence (I don’t like using the term ‘artificial’) so long as you could figure out a way of programming it to start the thing working in the first place.


If the AI has a superior intellect to humans and is self-aware, but it lacks feelings of guilt and sympathy, it would most likely be the end of the world as we know it.

Why would it bother? Something without "feelings" wouldn't "feel" like wiping us out (not that it would have any means to do so at its disposal if it did feel like doing it), or "feel" like saving itself from destruction. Even if it had the equivalent intelligence of all the greatest minds of history rolled into one, it would just sit there and do absolutely nothing. Not all that useful a creation.

Another note: if it did what you asked it to do, because it's programmed to do it, and it's not even thinking in the meantime, because it doesn't feel like thinking, then I wouldn't call that sentience, would you?

Edited by Naiba

Why would it bother? Something without "feelings" wouldn't "feel" like wiping us out (not that it would have any means to do so at its disposal if it did feel like doing it), or "feel" like saving itself from destruction. Even if it had the equivalent intelligence of all the greatest minds of history rolled into one, it would just sit there and do absolutely nothing. Not all that useful a creation.

Another note: if it did what you asked it to do, because it's programmed to do it, and it's not even thinking in the meantime, because it doesn't feel like thinking, then I wouldn't call that sentience, would you?

I'm saying that if we program an AI that acts like an animal, but without emotion, then (like an animal) it would feel compelled to out-survive the rest of the population. This would drive it to kill other organisms without a second thought. Therefore, it would be the end of life as we know it.


A computer does what it does by doing one thing at a time, in a pre-determined sequence, but incredibly quickly, in digital.

A brain does what it does by doing lots of separate and seemingly totally unconnected things all at the same time, in analogue (although slower, individually).

I get my internet through fibre-optic cables, which work by lots of on/off flashes of light.

A friend only has available the analogue radio waves of the 3G mobile phone network; clearly it's inconceivable he'll ever be able to get the internet.

That's not to say that intelligence is independent of its hardware, but I would assume these sorts of things are engineering challenges that will require solving to make it practical (either by building massively connected silicon chips or inventing a new technology), rather than a fundamental barrier to understanding it on even the smallest scale.

What we're lacking right now is an idea of the algorithms and similar in use. If you want to recreate the web, you need to learn HTML, CSS and HTTP - the details of how fibre optics work are irrelevant, and any means of conveying the higher-level data will work.


Artificial intelligence, much like humans, is an interesting subject not to be dealt with in a normal context. Some may say a true AI would be morally wrong, for similar reasons to why they say it is wrong to adjust humans in ways I do not remember the words to describe. It is strange, and the mind is so complex that not one human has figured out exactly how emotions and feelings work. I am not sure one could do such things. With evidence of the subconscious mind telling futures, to you this would be your gut feeling. Perhaps emotions are elsewhere. For some reason my gut feelings are telling me not to finish this, so I shall not continue... sorry... do not understand...

