
Karla

Members
  • Posts: 27
  • Reputation: 10 Good

Profile Information
  • About me: Rocketeer
  1. It does not make sense to talk about being a slave to your brain or to physics. You are your brain. It's like saying that a table is a slave to being a table because it has legs and a flat top. It's a meaningless statement that adds no information. So if I am a Nobel-prize-winning computer scientist, the achievement would belong to me because I am my brain. Without my brain there is no me. My brain is a system that works because of physics. (Also, the achievement would go to the person who started giving out Nobel prizes to computer scientists.)

     Whether the brain is predetermined or not is irrelevant, because we do not know the future. It's new to us. And it's not necessarily even accurate, but I will leave that up to those who know more about quantum mechanics, if they want to go down that rabbit hole. Even the term "predetermined" is equivocation. Predetermined by what? Citation required.

     I don't believe in free will. I don't act like an evil [snip more] because I have empathy. Empathy is an evolved instinct that is implemented by my brain. I am sensing religious overtones from you. I hear this argument all the time from theists who argue that without their god there is no morality. Lose faith in their own free will? Faith is belief without evidence. It does not alter reality.
  2. I once read some really horrible papers from the 1960s, from the very beginnings of research into synthetic emotions. It was painful, not because they were difficult or incomprehensible but because they were so wrong. They tried to argue that emotions act in the same way that interrupts do in computers. This was wrong because they were thinking only in terms of binary states. My supervisor explained to me that it was all the rage then to think of the brain as working like a computer.

     You can apply information theory to understand how the brain works. You can make the case that dendritic trees can implement binary logic and that neurons send off binary spikes. But the brain at its heart is a noisy analog system. Underlying all these mechanisms are analog cells, chemicals and electrical impulses. For example, neurons may fire binary spikes, but they also have continuous firing rates that change over time and go in and out of phase with firing patterns from other neurons. Neurons habituate and lose voltage over time. Connections between neurons change, strengthening and weakening gradually. Throw in neuromodulators, long-term potentiation and depression, and you have gain controls for different parts of the brain. Personally, I think the brain is best understood as a physical, thermodynamic, self-organising system.
  3. I googled it and then found a link to it further down the page: http://www.cs.bham.ac.uk/research/projects/cogaff/Aaron.Sloman_freewill.pdf It's been a very long time since I read it, though.
  4. I really recommend reading Eric Chaisson. He's an astrophysicist, but he writes about the rise of complexity from the big bang to modern-day society. I found it really eye-opening because you start to see how complexity arises from thermodynamic gradients created by perturbations in the big bang and driven by the expansion of the universe. The book is "Epic of Evolution: Seven Ages of the Cosmos", but he also has some papers online that discuss this. I first came across his work at a very opportune moment when reading the New Scientist. You can think of the biological age as just one stage along the arrow of time. Another fantastic book is "Into the Cool: Energy Flow, Thermodynamics and Life", which is more focused on abiogenesis but discusses the subject in terms of non-equilibrium thermodynamics.
  5. And conversely you can take it the other way and think of corporations as multi-person organisms, in the same way that an ant or bee colony can be understood as a single organism. Either way, it can quite often be useful to think in terms of the rise of complexity over time. For example, when trying to understand what emotions are, it can be fruitful to think of how single-celled organisms might have inadvertently started to communicate with each other via chemicals, when they sensed the same chemical environment that they were affecting.
  6. Aaron Sloman once wrote a really good paper going through every possible definition of free will and demonstrating how each was inadequate. I can't find the paper online anymore, although I am sure I have a copy on some hard disk somewhere. I just see the concept of free will as a quagmire on a par with the concept of 'qualia'.
  7. You can completely ignore the whole issue of free will. It's not useful when talking about intelligence. It's like asking whether a water wheel chooses to turn because of the flow of water. It's more accurate to say that a water wheel turns because the flow is strong enough. What's probably better to look at is the balance between cognition and emotions and/or instinct. Cognition widens the possibilities when performing action selection. Emotions and instincts narrow the available choices.
  8. The question "What is free will?" presupposes that free will exists. I don't believe in free will, inasmuch as I don't think it can ever be properly defined. Definitions are tools, and they must be useful. It is not useful to talk about free will when trying to figure out how a brain works or how we can build an A.I. The concept of free will can only ever be useful when referring to something external to an agent. For example: "I signed the contract of my own free will rather than because a gun was put to my head." This may be useful in a legal context but unfortunately does not help with discussions involving philosophy, science or engineering. It's not a useful term when discussing the internal workings of an agent's brain, because there is no clear demarcation between when you would and would not use it.
  9. The reason for using an intelligent agent rather than an automaton is that the former has more autonomy. This means that it can be more productive and requires less supervision, which saves on user costs. Compare a vacuum-cleaning robot with a vacuum cleaner. The former can be set off to clean the carpet while you go and stack the dishwasher. The latter needs to be manually pushed around the carpet. The same applies to computers. A machine intelligence (I'm coming round to the idea that there is value in using this definition) can be designed to perform more complex tasks with fewer instructions from the user. On the other hand, if you've ever written a computer program you will quickly appreciate that everything needs to be explicitly stated and that the program can crash whenever there is any ambiguity. This becomes even more important if you are talking about robots on other planets, where there is significant lag in communication, or robots used in hostile environments.

     Even the vacuum-cleaning robot described above will benefit from more intelligence, because the thing about real-world environments is that you cannot anticipate in advance everything that an intelligent agent will encounter. What if the owner has a pet dog that barks at the robot? Or a hamster? Or the dog poops on the carpet and the robot tries sucking it up? Or the robot encounters stairs? There will always be new things that your explicit programming won't cover.

     Lastly, the automaton will not necessarily be cheaper. The initial R&D will be cheaper for an automaton, but once you have an intelligent computer program you can make as many copies as you want, and this will save costs for the user because the agent requires less supervision and can be more productive. The cost of the body will be the same for an intelligent robot and an automaton.
  10. I don't think it is that much different. Although to me, work is not really that much different from whoring yourself: you're selling the use of your brain and your body (to a certain extent) for a limited period of time. At least dogs, and possibly soon robots too, get the chance to take pleasure in their work. I don't know how many people have a job they actually enjoy doing.
  11. Another reason to create truly intelligent agents is that by doing so we better understand ourselves.
  12. With regard to the OP about synthetic intelligent agents being slaves programmed to take pleasure from their jobs, how is this different from training dogs to sniff out explosives or to lead blind people about?
  13. While this is technically true, you are also merely arguing about definitions. Definitions are tools and should only really be discussed in terms of how useful they are. If a definition is no longer useful then we should change it. The 'Artificial' in A.I. is a historical hangover from classical A.I. and is still in use because people are familiar with it. It's actually better to talk about Synthetic Intelligence, but it's just not really an issue yet because the field hasn't made enough progress. I do appreciate that (hopefully) there will one day be a need to differentiate between true synthetic intelligence and smart programming, even though both of them are artificial rather than naturally occurring. At the moment, though, 95% of the field of A.I. is essentially just trickery. It's interesting to note that the first journal in the field of artificial emotions is the International Journal of Synthetic Emotions, possibly because the term "Artificial Emotions" hasn't had a chance to catch on.

      Sapience may be more specifically concerned with intelligence than sentience, but some argue that intelligence is not possible without the ability to feel. Minsky, for example, posed the question of whether you can have intelligence without emotions. There are many good reasons to believe that you cannot. I would also question whether you can have sentience without some degree of sapience.

      As with all other troublesome definitions, intelligence can be plotted on a sliding scale. Some agents are more intelligent than others. At the bottom of the scale are the stimulus/response agents. So at what arbitrary point do we say that something is sapient or sentient? The same could be said for consciousness.
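The analog-dynamics-under-binary-spikes picture in item 2 can be made concrete with a toy leaky integrate-and-fire model. This is a minimal illustrative sketch, not a model from any of the papers mentioned; all constants and names here are arbitrary assumptions.

```python
# Toy leaky integrate-and-fire neuron: the membrane voltage is a
# continuous (analog) quantity, but the output is a binary spike train.
# All parameter values are illustrative, not physiologically calibrated.

def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    voltage = 0.0
    spikes = []
    for current in input_current:
        # Analog dynamics: voltage integrates input and leaks toward rest.
        voltage += dt * (current - leak * voltage)
        if voltage >= threshold:
            spikes.append(1)   # all-or-nothing binary spike
            voltage = 0.0      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A stronger constant input yields a higher firing rate, even though
# each individual output event is binary.
weak = simulate_lif([0.15] * 50)
strong = simulate_lif([0.40] * 50)
```

The point of the sketch is the duality the post describes: every output event is all-or-nothing, yet the firing rate varies continuously with input strength.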
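The widen/narrow balance described in item 7 could be sketched as a two-stage selection loop: a cognitive stage enumerates candidate actions, and an emotional/instinctive stage prunes them. All function names, actions and thresholds here are hypothetical, purely to make the structure concrete.

```python
# Two-stage action-selection sketch: a "cognitive" stage widens the
# space of candidate actions, then an "emotional/instinctive" stage
# narrows it. Names and values are illustrative assumptions only.

def generate_candidates(percepts):
    # Cognition widens the space: enumerate plausible actions,
    # including extra options suggested by what is perceived.
    actions = ["approach", "inspect", "ignore"]
    if "obstacle" in percepts:
        actions.append("detour")
    return actions

def emotional_filter(actions, fear_level):
    # Emotion narrows the space: high fear vetoes risky actions.
    risky = {"approach", "inspect"}
    if fear_level > 0.7:
        actions = [a for a in actions if a not in risky]
    return actions

candidates = generate_candidates({"obstacle"})
chosen = emotional_filter(candidates, fear_level=0.9)
```

Nothing here requires any notion of free will: the agent's "choice" is just the residue left after generation and pruning, much like the water wheel turning because the flow is strong enough.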