
Artificial Intelligence: Slavery?


Helix935


Why are we giving them souls? I mean, that just sounds inefficient.

An AI would:

Take a lot of computing power

Be very expensive to create.

Be expensive to run (Electricity costs are high when you're dealing with the kind of power they would need)

Be devoting a lot of its processing power to things not its job.

A stupider program would:

Take a lot less computing power

Be cheaper

Use less electricity to run

Be devoting maximum attention towards its job.

As an evil greedy profit-focused corporation, which option would I take?

After all the easy jobs are taken by "stupider programs", corporations can raise their profit by replacing the hard jobs (the ones that took human intelligence) with cheaper-than-human AI. As long as there are jobs left that humans can fill, there will be an opportunity to save labor costs by replacing them with better and better AI.

The only point where you could argue that smarter AI isn't profitable anymore is in a completely humanless economy. And even there, even smarter AI may be more cost-effective for some tasks.

An economy can be completely humanless; the driving force of an economy doesn't need to be consumption by humans. As long as there is space with resources to expand into, an economy consisting of automated corporations could fill the galaxy.

Edited by N_las

Really, it is going to be inevitable that someone designs an AI (or MI, if you care about the definitions :D ), and the general agreement amongst scientists is that rather than trying to inhibit something you have no ability to detect being researched, you should research it yourself and establish some sort of guidelines for it. Hopefully, by the time someone intentionally designs an AI to go skynet on us, we will have several other AI systems in place that exist to counter the skynet one.

To make my point about the research: with nuclear weapons, it is easy to see when someone is using them. The US has all kinds of sensors for this: seismic, radiation-sensing, thermal, etc. It is impossible to detonate a nuke on or near Earth without the US knowing about it. Fun fact: this system has detected roughly a dozen 1+ kiloton-scale explosions caused by meteors impacting the Earth in the last DECADE. Now, given that we have this ability to tell you are testing a full-scale weapon, and given how obvious it tends to be that you are researching one (specialized parts, radiation signatures, etc.), you'd think we'd be pretty good at keeping people from researching it. Well, just look at the IAEA and see how largely ineffective they are.

Now imagine you have an agency tasked with fighting AI research. The ONLY sign you have that someone is working on this is them constructing a large server farm, something that can be used for thousands of purposes. And in order to hide your research, you just wipe the on-site hard drives and replace their contents with something like weather modeling or nuclear warhead simulations. No evidence left behind.

The consensus is that AI/MI is inevitable, so what we need to do is work on setting up a guideline or code (code of ethics) for researchers. You can have a system designed for war that has backups to prevent skynet-esque crazy.


"

As an evil greedy profit-focused corporation, which option would I take?"

There are states, there are billionaires. Anyway, I think that to get something "sentient" we need to build something that can understand a language (there is a webpage with an AI we can chat with, isn't there?) and then let it learn by itself on Wikipedia. Nevertheless, that just makes a parrot; but then, using the data it has, we can give it orders ("solve this, solve that, ...") and, by reducing or increasing the amount of computational power it gets, we can reward it, gradually making it "sentient". I don't get why that would be interesting, though.


Why are we giving them souls? I mean, that just sounds inefficient.

In short, to see if we can.

In long, because if we can create an intelligence from scratch, then we can use our knowledge from that to help understand our own. Yes, they operate under wildly different formats, but the AI/MI would be designed to mimic us as closely as we can manage, so even if we have to fill in some gaps in our theories in order to make the AI/MI, we can then test those "fillers" to see whether that is how it works for humans.


No, I just want to know what your definition of intelligence is.

Sorry then, it sounds like I was reading too much into your question. It sounded to me like you were trying to open up the beginning of a (IMO) frivolous debate, which is why I responded the way I did.


Why are we giving them souls? I mean, that just sounds inefficient.

An AI would:

Take a lot of computing power

Be very expensive to create.

Be expensive to run (Electricity costs are high when you're dealing with the kind of power they would need)

Be devoting a lot of its processing power to things not its job.

A stupider program would:

Take a lot less computing power

Be cheaper

Use less electricity to run

Be devoting maximum attention towards its job.

As an evil greedy profit-focused corporation, which option would I take?

Why would we give them "souls"? Well, it might be important for an MI to make moral decisions in some cases, whether by exigency or by designed purpose. I also think humans have a hunger to create, and a loneliness that desires contact with alien minds. We'd create sapient, sentient, and moral MIs because we want them.

I'd think we'd also have MIs that were more job-focused. Kinda like slaves, except that they WANT to do their work, because they get enjoyment out of doing the best possible job of it. If we felt guilty about their rights, they could possibly be offered a programming change.

In the end, how are humans a lot different? Most humans get stuck doing jobs they don't like; the lucky few who have a job they like- did they choose what they like? No- I know what I like, but I have essentially zero control over that. So a machine intelligence that could be re-programmed to like different jobs would be luckier than us, in fact.

Anyway, I think it might be important to have a large number of free, moral, life-respecting machine intelligences. The machine community could thus be self-policing, and would weed out and re-program (if possible) those individuals who showed harmful tendencies. The only real counter to an evil super intelligence is a bunch of good super intelligences.

I guess I can't be entirely optimistic, however. Of course, once we have free superintelligent machines, we lose any ability to control them. We'd have to hope that they did not evolve over time into a society that did not respect life. I don't think that would be something we'd really have to worry about, though: biological life and machine life can survive in entirely different environments, and there are enough resources to go around for everyone in the solar system.


An evil corporation will never build an Artificial General Intelligence, as more specialized automation is cheaper and better for most tasks.

However, if an Artificial General Intelligence is possible, it WILL be built... by research teams, to prove it can be done. It doesn't matter if it's as useful as a blind kid with Down's syndrome; they'll find a "lighthouse for the blind" type program to keep it going, to study it.


@The people saying "because we can":

But then those AIs wouldn't be doing jobs, and this discussion is about AIs doing jobs against their will.

@Velocity

Why does one need a soul to create a moral decision? If we can quantify the soul, surely we can quantify morals.

After all the easy jobs are taken by "stupider programs", corporations can raise their profit by replacing the hard jobs (the ones that took human intelligence) with cheaper-than-human AI. As long as there are jobs left that humans can fill, there will be an opportunity to save labor costs by replacing them with better and better AI.

The only point where you could argue that smarter AI isn't profitable anymore is in a completely humanless economy. And even there, even smarter AI may be more cost-effective for some tasks.

An economy can be completely humanless; the driving force of an economy doesn't need to be consumption by humans. As long as there is space with resources to expand into, an economy consisting of automated corporations could fill the galaxy.

But what jobs actually truly require human intelligence? If we can program something with a soul, surely we can program something with only an individual piece of a "soul" rather than the entire thing. Sentience is inefficiency; it means that part of your brain is doing something else. Everything unachievable by computers at the moment that would be achievable by an AI would probably be achievable by a lesser program; in fact, it would probably be achieved before an AI is, as a building block to an AI.


Forgive me for nit-picking, but in the US at least, that isn't entirely correct. Such legal terms here apply to "legal persons", rather than "human beings". And there are non-humans, particularly corporations, that have acquired an amazing list of those rights over the past century.

I don't know how one would enslave a corporation, but if someone did, I'd bet on judges ruling it violated the 13th amendment.

In fact, my advice to a sentient AI who wanted to be free would be to first incorporate itself, then apply for a patent on itself. Those tiny steps provide a lot of protection for under $500.

Humans make up corporations but I see where you are going with that. Love your idea on the patent, very clever!


With regard to the OP about synthetic intelligent agents being slaves programmed to take pleasure from their jobs, how is this different from training dogs to sniff out explosives or to lead blind people about?

Well, I'm just wondering how that is any different from humans and our feelings, which are more or less animal instinct given nicer terminology. I'm not even sure we humans exercise free will more than, say, 5 percent of the time.


Why does one need a soul to create a moral decision? If we can quantify the soul, surely we can quantify morals.

I'd personally advise against using the word soul in this context. It's a word that's pretty heavily loaded with subjectivity and values, and in order to use it to convey any meaningful ideas you'd have to construct an exact definition for it first. And by doing that you'd probably contradict almost every other person's view of what a soul is.

In any case, quantifying morality and ethics has been done in many different ways by a vast number of people. Some of the theories have had limited success in describing how people behave in very specific settings, such as consumer behaviour. But on a more general level, the problem with quantifying morality is that it always leads back to the people who are defining the values in the first place.

So to some small extent you can quantify your own morality, and you can quantify the morality of people in general or of any other subgroup. But since morality is itself a subjective and largely qualitative concept, you can only quantify it subjectively and in retrospect. What I mean by this is that, starting from a completely "blank person", you don't first arrive at certain values, then apply them to your own thinking and then reach a moral decision. What happens is that you first develop your own concept of morality, then quantify it, and then you can in theory use that quantification to predict your own decision in a given moral problem. If you don't have morality in the first place, you cannot assign the values you need to quantify it.

So the problem here is that you can program a synthetic mind to behave according to some moral standards. But the values of that morality are actually chosen by you, and the quantification process is also creator-dependent. So what it ends up representing is your idea of a system which quantifies your idea of moral values. But now the machine is deprived of free will, as you're hardcoding it to "like" and "dislike" some actions more than others. And now we're already pretty deep in the territory where the machine is pretty much just an automaton pretending to be a moral subject. So you can call this morality, but I don't see how it even remotely resembles the morality humans possess.
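To make the creator-dependence point concrete, here is a minimal sketch (every action name and weight below is invented purely for illustration): the machine's "likes" and "dislikes" are just numbers someone else picked, so whatever it "decides" is really a lookup in its creator's value table.

```python
# Hypothetical sketch: "morality" as a creator-chosen value table.
# Every name and number here is made up for illustration.

CREATOR_VALUES = {          # chosen by the programmer, not by the machine
    "tell_truth":    +10,
    "break_promise":  -8,
    "harm_human":   -100,
    "finish_job":     +5,
}

def moral_score(actions):
    """Sum the creator-assigned weights for a plan of actions."""
    return sum(CREATOR_VALUES.get(a, 0) for a in actions)

def choose(plans):
    """Pick the plan the creator's table scores highest -- the machine
    'prefers' it only because someone hardcoded these numbers."""
    return max(plans, key=moral_score)

if __name__ == "__main__":
    plans = [["finish_job", "break_promise"], ["finish_job", "tell_truth"]]
    print(choose(plans))   # -> ['finish_job', 'tell_truth']
```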


"So the problem here is that you can program a synthetic mind to behave according to some moral standards. But the values of morality are actually chosen by you and the quantification process is also creator-dependant. So what it ends up representing is your idea of a system which quantifies your idea of moral values. But now the machine is deprived of free will as you're hardcoding it to "like" and "dislike" some actions more than the others. And now we're already pretty deep in the territory where the machine is pretty much just an automaton pretending to me a moral subject. So you can call this morality but I don't see how it even remotely resembles the morality the humans possess."

Well, we humans are already hardcoded, for reproduction and survival for example. That doesn't stop us from being "sentient". Hardcoding is mandatory, but there are already programs which can modify themselves, thus evolving. And as a bonus, our entire universe is "hardcoded".


Forgive me for nit-picking, but in the US at least, that isn't entirely correct. Such legal terms here apply to "legal persons", rather than "human beings". And there are non-humans, particularly corporations, that have acquired an amazing list of those rights over the past century.

I don't know how one would enslave a corporation, but if someone did, I'd bet on judges ruling it violated the 13th amendment.

In fact, my advice to a sentient AI who wanted to be free would be to first incorporate itself, then apply for a patent on itself. Those tiny steps provide a lot of protection for under $500.

A corporation is a legal construct which predates the US; without it you would only have businesses with a single owner. And yes, a corporation has to have an owner; you cannot "free" it.

The reason a company is a legal person is that in most business or legal cases it's treated the same way as a person: it can borrow money, own things, sign contracts and do other business-related things, and it can also be sued or even charged and have to pay fines.

Without corporations as legal persons you would need a single owner who had full responsibility for everything the company did, and creditors would go after him. He would also have full practical control: say it was a shipping company, the owner could sell all the boats and this would be legal. Yes, the investors could go after him to get the money back, but this would not help them if he moved to another country.


Well, we humans are already hardcoded, for reproduction and survival for example. That doesn't stop us from being "sentient". Hardcoding is mandatory, but there are already programs which can modify themselves, thus evolving. And as a bonus, our entire universe is "hardcoded".

Of course, survival and reproduction aren't moral issues in themselves; they're just desires, like the craving for sugar and fat. Though they do turn very quickly into moral issues, so I'm just poking at the semantics a bit. Then again, most of this whole issue is pure semantics. But yeah, in a sense we also have hardcoded moral values through emotions and culturally adopted norms, like freedom of speech for example. And I'm in the same boat as 78stonewobble above in that I don't think humans really exercise free will in most of their actions, at least not explicitly.

But the main difference here is that the human moral system isn't predetermined by a set of values which it is then forced to act on; rather, it's re-evaluated on the spot when making moral decisions, and it often behaves in a highly illogical and emotional way. If you program a machine to make decisions based on some quantified set of values, it doesn't get that choice. You can let it modify itself, but if you treat morality as quantifiable, you're implicitly denying it irrationality.


I'd personally advise against using the word soul in this context. It's a word that's pretty heavily loaded with subjectivity and values, and in order to use it to convey any meaningful ideas you'd have to construct an exact definition for it first. And by doing that you'd probably contradict almost every other person's view of what a soul is.

In any case, quantifying morality and ethics has been done in many different ways by a vast number of people. Some of the theories have had limited success in describing how people behave in very specific settings, such as consumer behaviour. But on a more general level, the problem with quantifying morality is that it always leads back to the people who are defining the values in the first place.

So to some small extent you can quantify your own morality, and you can quantify the morality of people in general or of any other subgroup. But since morality is itself a subjective and largely qualitative concept, you can only quantify it subjectively and in retrospect. What I mean by this is that, starting from a completely "blank person", you don't first arrive at certain values, then apply them to your own thinking and then reach a moral decision. What happens is that you first develop your own concept of morality, then quantify it, and then you can in theory use that quantification to predict your own decision in a given moral problem. If you don't have morality in the first place, you cannot assign the values you need to quantify it.

So the problem here is that you can program a synthetic mind to behave according to some moral standards. But the values of that morality are actually chosen by you, and the quantification process is also creator-dependent. So what it ends up representing is your idea of a system which quantifies your idea of moral values. But now the machine is deprived of free will, as you're hardcoding it to "like" and "dislike" some actions more than others. And now we're already pretty deep in the territory where the machine is pretty much just an automaton pretending to be a moral subject. So you can call this morality, but I don't see how it even remotely resembles the morality humans possess.

Very well then, replace all instances of "a soul" with "sentience". It makes no difference.

What is the advantage of a thinking sentient AI over an automaton which merely executes the morality of a human? We know that the latter will more closely follow accepted ethics than the former, the latter is cheaper, and the latter is easier to control. What advantage does the former possess?


Very well then, replace all instances of "a soul" with "sentience". It makes no difference.

What is the advantage of a thinking sentient AI over an automaton which merely executes the morality of a human? We know that the latter will more closely follow accepted ethics than the former, the latter is cheaper, and the latter is easier to control. What advantage does the former possess?

From a strictly utilitarian point of view, not much really, unless you want something like a robot friend or artist, in which case those things could matter. But the automaton executes morality as defined by its creator; the free-thinking intelligence executes morality as defined by itself. So from the original point of view in this thread it makes a world of difference. I personally wouldn't allow anyone to pull the plug on a free-thinking machine intelligence if it was advanced enough that I would consider it a self-aware, morally acting and sentient creature. For the automaton, who cares? It's a program.


Well, I'm just wondering how that is any different from humans and our feelings, which are more or less animal instinct given nicer terminology. I'm not even sure we humans exercise free will more than, say, 5 percent of the time.
...

Just to be clear: you guys do understand that no being whose mind is dictated by physicality can possibly have free will, right? If everything in our brain follows the laws of physics, then there is truly no such thing as free will. Our brains make a physical calculation using neurons, the result of which is our thoughts or actions. We're a slave to the rules of the calculation.

True, quantum mechanics may occasionally play with the results- our brains are probably highly chaotic systems where it doesn't take long for Heisenberg uncertainty to be amplified to a macroscopic scale. That might be even more true for tough decisions. Who knows, it could be that the brain actually uses Heisenberg uncertainty to help randomize some of our choices- random behavior can be a survival advantage because it makes it harder for a predator to predict your behavior. But Heisenberg uncertainty is not free will- the fact that we have no control over uncertainty is precisely WHY it's uncertainty. That said, there is no evidence as of yet that the brain deliberately uses any quantum effects in so much as those effects are used as part of a decision making/thinking process.

Anyway, off from the quantum mechanics tangent, there cannot possibly be free will without supernatural effects. I'm not making a judgement as to whether those supernatural effects exist or not. I just want to make sure you guys understand that when you talk of humans having free will, you are implying that there must be an immaterial, supernatural soul that affects our decisions. Free will == supernatural soul.

If there are no supernatural effects in our brains, then there is absolutely no reason at all that machines can't have the same illusion of free will that we have- so they would have just as much free will as we do (zero, but with the illusion of free will existing).

So the problem here is that you can program a synthetic mind to behave according to some moral standards. But the values of morality are actually chosen by you and the quantification process is also creator-dependant. So what it ends up representing is your idea of a system which quantifies your idea of moral values. But now the machine is deprived of free will as you're hardcoding it to "like" and "dislike" some actions more than the others. And now we're already pretty deep in the territory where the machine is pretty much just an automaton pretending to me a moral subject. So you can call this morality but I don't see how it even remotely resembles the morality the humans possess.

Gee, I don't remember anyone asking ME what moral standards I'd like to follow. They were forced upon me by what society deems acceptable, and by what my brain finds acceptable. I didn't choose the programming of my brain. We've seen evidence that many non-human animals have at least a rudimentary sense of right and wrong, and the more social and intelligent an animal is, the more moralistic behavior we typically observe in them. Thus, it seems most likely- and there is quite a bit of evidence for this- that the human brain was programmed by natural selection for moral behavior and a sense of morals. Without moral behavior, it is much, much harder to build a community, a tribe, a society. Moral behavior allowed our early hominid ancestors to have a survival advantage.

So we're programmed by evolution and society to have a sense of morals. Our free will is just as illusory as any machine's would be.

I don't see any way to possibly disagree with this unless you believe in a supernatural, immaterial soul. If you DO, then it's OK to say so, and I will grant that from your point of view, I can be incorrect. But if you don't, then how can you possibly disagree with the above?

BTW, if an immaterial, supernatural soul exists, it can be proven. It's the one intersection of the physical world and the supernatural world that would have to be happening in at least seven billion places right this second around the world, more if you're not a human elitist and you believe that animals have souls too. All we would have to do is look at the brain closely enough, and we would see the laws of physics breaking down, thus scientifically proving that souls exist. (To those who think it's a contradiction in terms to think of scientifically proving the supernatural, the supernatural universe COULD in fact logically exist; it would have a similar relationship to our universe as our universe does to a simulated universe on a computer. We can make "supernatural" things occur in a simulated universe, even while all objects in that simulated universe must follow the physical laws of the simulated universe. I'm not saying that the existence of the supernatural would imply that our universe is a simulated universe, I am only saying that the relationship would be similar.) Anyway, I digress.

Edited by |Velocity|

Confining any mortal being that can understand (knowledge is not understanding) that the universe is more than the self to any labour, under threat of direct, harsh consequence, by another being capable of understanding the suffering of its tenant, is certainly wrong. Whether a being understands rather than knows is the hard part. Also, although it's wrong, sometimes the alternative is worse.


From a strictly utilitarian point of view, not much really, unless you want something like a robot friend or artist, in which case those things could matter. But the automaton executes morality as defined by its creator; the free-thinking intelligence executes morality as defined by itself. So from the original point of view in this thread it makes a world of difference. I personally wouldn't allow anyone to pull the plug on a free-thinking machine intelligence if it was advanced enough that I would consider it a self-aware, morally acting and sentient creature. For the automaton, who cares? It's a program.

But this thread is talking about whether or not forcing a sentient program to work is slavery. My point is that anything a sentient program can do, an automaton could do cheaper, and thus there would be no enslavement of sentient programs because there would be no sentient programs working.

In the cases of robofriend and robo-artist, you can't force a sentient thing to be a friend, and you can't force a sentient thing to be an artist of the kind which would require sentience, so there's still no issue.


Just to be clear: you guys do understand that no being whose mind is dictated by physicality can possibly have free will, right? If everything in our brain follows the laws of physics, then there is truly no such thing as free will. Our brains make a physical calculation using neurons, the result of which is our thoughts or actions. We're a slave to the rules of the calculation.

I'm not gonna get into this discussion any more than a surface scratch, but just to point out: there is no current consensus on either free will or on what consciousness is and how it's formed, so you can't really just smack around your view that people have no free will like it's a universal fact. Or if you can point this consensus out to me, I'll gladly admit I'm wrong about it.

Secondly, a chaotic system does not necessarily have to rely on any sort of quantum mechanical randomness in order to be chaotic. Even purely deterministic systems can be chaotic, and on the other hand a system can also be indeterministic without being supernatural. But everything that happens in our brain is governed at least partly by biochemical processes, which are a collection of statistical effects of countless molecules which are in turn individually governed by quantum mechanics, so yes, at the individual cell level there is total randomness involved everywhere.
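Just to illustrate the "deterministic yet chaotic" point, here's a minimal sketch using the standard logistic map (nothing brain-specific about it, purely an illustration): two trajectories that start a trillionth apart end up completely unrelated within a few dozen perfectly deterministic steps.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): fully deterministic, yet chaotic at r = 4.
r = 4.0
a, b = 0.200000000000, 0.200000000001   # initial conditions differ by 1e-12

for step in range(1, 51):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.3e}")

# The gap grows roughly exponentially; by around step 40 the two runs are unrelated,
# even though no randomness (quantum or otherwise) was involved anywhere.
```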

Gee, I don't remember anyone asking ME what moral standards I'd like to follow. They were forced upon me by what society deems acceptable, and by what my brain finds acceptable. I didn't choose the programming of my brain. We've seen evidence that many non-human animals have at least a rudimentary sense of right and wrong, and the more social and intelligent an animal is, the more moralistic behavior we typically observe in them. Thus, it seems most likely- and there is quite a bit of evidence for this- that the human brain was programmed by natural selection for moral behavior and a sense of morals. Without moral behavior, it is much, much harder to build a community, a tribe, a society. Moral behavior allowed our early hominid ancestors to have a survival advantage.

Have you not even once in your life sat down and thought about an opinion you have or why you feel a certain way? And then realized that it's something you've never thought about but now that you actually think it through, you feel different about it? You have literally never ever ever in your life changed how you feel about things? Maybe I'm some sort of superhuman but I do that on a daily basis. I constantly re-evaluate my moral values and every now and then realize I've been stupid and childish about some things and that feeling essentially rewrites a part of my brain. Result is a change in my moral standards.


Just to be clear: you guys do understand that no being whose mind is dictated by physicality can possibly have free will, right? If everything in our brain follows the laws of physics, then there is truly no such thing as free will. Our brains make a physical calculation using neurons, the result of which is our thoughts or actions. We're a slave to the rules of the calculation.

True, quantum mechanics may occasionally play with the results- our brains are probably highly chaotic systems where it doesn't take long for Heisenberg uncertainty to be amplified to a macroscopic scale. That might be even more true for tough decisions. Who knows, it could be that the brain actually uses Heisenberg uncertainty to help randomize some of our choices- random behavior can be a survival advantage because it makes it harder for a predator to predict your behavior. But Heisenberg uncertainty is not free will- the fact that we have no control over uncertainty is precisely WHY it's uncertainty. That said, there is no evidence as of yet that the brain deliberately uses any quantum effects in so much as those effects are used as part of a decision making/thinking process.

Anyway, off from the quantum mechanics tangent, there cannot possibly be free will without supernatural effects. I'm not making a judgement as to whether those supernatural effects exist or not. I just want to make sure you guys understand that when you talk of humans having free will, you are implying that there must be an immaterial, supernatural soul that affects our decisions. Free will == supernatural soul.

If there are no supernatural effects in our brains, then there is absolutely no reason at all that machines can't have the same illusion of free will that we have- so they would have just as much free will as we do (zero, but with the illusion of free will existing).

You could say that "free will" is a construct of irrationality within decision-making loops. When you "decide" to do something, a highly evolved analytical device weighs the current situation against its database of life experience, instincts, and personal disposition and emotional state, taking shortcuts to improve reaction time (as ancestors who thought everything through completely tended to get eaten by any "New Data" with fangs). Given how many of those shortcuts people share, this can create the "people are sheep" illusion, and bypassing those shortcuts to make a decision (good or bad) can be thought of as exercising free will.
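A toy way to picture that shortcut-first loop (every name and number here is hypothetical, purely to illustrate cached reflexes versus slow, noisy deliberation):

```python
import random

# Hypothetical shared "shortcuts": instant reactions most people have in common.
SHORTCUTS = {
    "snake_shape": "jump_back",
    "loud_bang":   "duck",
}

def decide(situation, experience):
    """Fast path: fire a shared reflex if one matches (quick, predictable, sheep-like).
    Slow path: weigh options from personal experience, with a noisy emotional nudge."""
    if situation in SHORTCUTS:
        return SHORTCUTS[situation]
    options = experience.get(situation, {"freeze": 0.0, "investigate": 0.0})
    noisy = {act: weight + random.gauss(0, 0.1) for act, weight in options.items()}
    return max(noisy, key=noisy.get)

print(decide("snake_shape", {}))                                               # reflex path
print(decide("odd_noise", {"odd_noise": {"investigate": 0.3, "ignore": 0.1}})) # deliberation
```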

Edited by Rakaydos

I'm not gonna get into this discussion any more than a surface scratch, but just to point out: there is no current consensus on either free will or on what consciousness is and how it's formed, so you can't really just smack around your view that people have no free will like it's a universal fact. Or if you can point this consensus out to me, I'll gladly admit I'm wrong about it.

You're probably right in that there is no standardized way to define it. For the purposes of this discussion, free will is a decision made that is NOT the result of entirely physical processes. Otherwise, the decision was made by deterministic or random processes, and neither determinism nor randomness has room for purpose or will. So in essence, by this definition free will MUST be supernatural. In addition, YOU'RE the one who (indirectly) defined/implied free will here as being something supernatural first, not me. You did this by saying that a machine cannot have free will. The implication of the supernatural comes about quite clearly:

Machines are limited by the same physical laws that limit us. If no machine can ever have free will, then either

a) We cannot have free will either because we're bound by the same laws as the machine;

b) We have free will, which is granted to us by a supernatural power that we could never incorporate into a machine.

If our brains are entirely bound by the physical laws of nature, then every thought we think and every decision we make is just a result of a complex interaction of physical laws. We did not choose those physical laws or how our brain is wired; we in fact choose nothing. Hence, we have an illusion of free will, but in fact, all our decisions were determined by laws of nature beyond our control.

Secondly, a chaotic system does not necessarily have to rely on any sort of quantum mechanical randomness in order to be chaotic. Even purely deterministic systems can be chaotic, and on the other hand a system can also be indeterministic without being supernatural. But everything that happens in our brain is governed at least partly by biochemical processes, which are a collection of statistical effects of countless molecules which are in turn individually governed by quantum mechanics, so yes, at the individual cell level there is total randomness involved everywhere.

In my understanding, the length of time for which any chaotic system can be predicted is limited by the precision with which you can measure its state, and by how chaotic the system is (the more chaotic, the faster it approaches unpredictability for some given amount of initial measurement precision). Does not quantum mechanics provide the ultimate precision limit? Thus a truly chaotic system will eventually amplify quantum mechanical effects to a macroscopic scale.
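For what it's worth, the usual back-of-the-envelope estimate of that limit (a standard chaos-theory argument; the rates and numbers below are made up for illustration) says the prediction horizon only grows logarithmically as the initial measurement error shrinks, which is why a hard floor on precision puts a hard ceiling on prediction:

```python
import math

# Rough prediction-horizon estimate for a chaotic system:
#   t_horizon ≈ (1/λ) * ln(tolerance / initial_error)
# λ (the Lyapunov exponent) and all numbers below are hypothetical.
lam = 1.0          # error e-folding rate, per second
tolerance = 1.0    # error size at which the forecast becomes useless

for initial_error in (1e-3, 1e-9, 1e-35):   # instrument-grade down to a near-"quantum" scale
    horizon = math.log(tolerance / initial_error) / lam
    print(f"initial error {initial_error:.0e}  ->  horizon ≈ {horizon:.0f} s")

# Improving the measurement from 1e-3 to 1e-35 (32 orders of magnitude) only stretches
# the horizon from ~7 s to ~81 s: a precision floor is effectively a prediction ceiling.
```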

I was told recently by a researcher in chaotic systems that someone once did a calculation/estimate about how long it would take for quantum uncertainty to have a measurable impact on weather, and the result was "surprisingly short" (he didn't remember the actual time scale though). Maybe I should look for the paper, if it was presented in one.

Anyway, our measurement capabilities and ability to model complex systems are not yet powerful enough for us to reach the quantum uncertainty bound anyway, at least for any systems I know of... but I'm no expert in the field. I suppose we might be able to make such a system though.

Have you not even once in your life sat down and thought about an opinion you have or why you feel a certain way? And then realized that it's something you've never thought about but now that you actually think it through, you feel different about it? You have literally never ever ever in your life changed how you feel about things? Maybe I'm some sort of superhuman but I do that on a daily basis. I constantly re-evaluate my moral values and every now and then realize I've been stupid and childish about some things and that feeling essentially rewrites a part of my brain. Result is a change in my moral standards.

I flip flop my opinions daily based on the latest information I receive. But that doesn't require some supernatural free will that couldn't be replicated in a machine. In fact, if there is no soul directing our thoughts and actions, we are nothing but chemical-electrical machines ourselves. If machines can't have free will, then either we can't either, OR our assumption about the brain being entirely bound by physical laws is false and we have supernatural souls.

If you can't tell already, I personally find it very unlikely for us to have souls in the supernatural sense; thus it follows that there must exist some configuration of computational components that would exactly replicate all aspects of the human mind and human experience. Thus it follows that machines CAN have free will, at least, just as much as we do- and that is true regardless of what your definition of free will is.

Does that mean that any intelligent machine would have the same illusion of free will that we do? Of course not. Does that mean that any intelligent machine would be as capable of self-evaluation as we are? Of course not. But some machines could. Some machines might even appear to have MORE free will than we do.

Edited by |Velocity|

Secondly, a chaotic system does not necessarily have to rely on any sort of quantum mechanical randomness in order to be chaotic. Even purely deterministic systems can be chaotic, and on the other hand a system can also be indeterministic without being supernatural. But everything that happens in our brain is governed at least partly by biochemical processes, which are a collection of statistical effects of countless molecules which are in turn individually governed by quantum mechanics, so yes, at the individual cell level there is total randomness involved everywhere.

This is only obviously true on a macro scale (and it may well be that's all that matters.) But classical mechanics (including general relativity) suggests a perfectly deterministic universe, and there are interpretations of quantum mechanics (e.g. many-worlds) that do the same.

