
Karla

Everything posted by Karla

  1. It does not make sense to talk about being a slave to your brain or to physics. You are your brain. It's like saying that a table is a slave to being a table because it has legs and a flat top. It's a meaningless statement that adds no information. So if I am a Nobel-prize-winning computer scientist, the achievement would belong to me because I am my brain. Without my brain there is no me. My brain is a system that works because of physics. (Also, the achievement would go to the person who started giving out Nobel prizes to computer scientists.) Whether the brain is predetermined or not is irrelevant because we do not know the future. It's new to us. And it's not necessarily even accurate, but I will leave that up to those who know more about quantum mechanics if they want to go down that rabbit hole. Even the term "predetermined" is equivocation. Predetermined by what? Citation required. I don't believe in free-will. I don't act like an evil [snip more] because I have empathy. Empathy is an evolved instinct that is implemented by my brain. I am sensing religious overtones from you. I hear this argument all the time from theists who argue that without their god there is no morality. Lose faith in their own free will? Faith is belief without evidence. It does not alter reality.
  2. I read some really horrible papers once that were written in the 1960s, from the very beginnings of research into synthetic emotions. It was painful. Not because they were difficult or incomprehensible but because they were so wrong. They were trying to argue that emotions acted in the same way that interrupts do in computers. It was wrong because they were thinking only in terms of binary states. My supervisor explained to me that it was all the rage then to think of the brain as working like a computer. You can apply information theory to understand how the brain works. You can make the case that dendritic trees can implement binary logic and that neurons send off binary spikes. But the brain at its heart is a noisy analog system. Underlying all these mechanisms are analog cells, chemicals and electrical impulses. For example, neurons may fire binary spikes, but they also have continuous firing rates that change over time and go in and out of phase with firing patterns from other neurons. Neurons habituate and lose voltage over time. Connections between neurons change, strengthening and weakening gradually over time. Throw in neuromodulators, long-term potentiation and depression, and you have gain controls for different parts of the brain. Personally speaking, I think the brain is best understood as a physical, thermodynamic, self-organising system.
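To make the binary-spikes-on-analog-dynamics point concrete, here is a toy leaky integrate-and-fire neuron. This is a standard textbook model, not something from the papers mentioned above, and all parameter values are purely illustrative. The membrane voltage is a continuous (analog) quantity that leaks toward rest; the spikes are the binary events that ride on top of it, and the continuous firing rate varies with input strength.

```python
def simulate_lif(input_current, steps=1000, dt=0.001,
                 tau=0.02, threshold=1.0, reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane voltage v evolves continuously, leaking toward zero with
    time constant tau while being driven by the input. Whenever v crosses
    the threshold, a binary spike is recorded and v is reset.
    """
    v = 0.0
    spike_times = []
    for step in range(steps):
        # Analog update: leak term plus input drive.
        v += dt * (-v / tau + input_current)
        if v >= threshold:          # Binary event: spike, then reset.
            spike_times.append(step * dt)
            v = reset
    return spike_times

# Stronger analog input -> higher continuous firing rate,
# even though each individual spike is all-or-nothing.
low = simulate_lif(input_current=60.0)
high = simulate_lif(input_current=120.0)
print(len(low), len(high))
```

The point of the sketch is that "binary" and "analog" are not in tension here: the spike train is discrete, but the quantity that generates it (and the rate it settles into) is continuous.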
  3. I googled it and then found a link to it further down the page http://www.cs.bham.ac.uk/research/projects/cogaff/Aaron.Sloman_freewill.pdf It's been a very long time since I read it though.
  4. I really recommend reading Eric Chaisson. He's an astrophysicist but he writes about the rise of complexity from the big bang to modern day society. I found it really eye opening because you start to see how complexity arises from thermodynamic gradients created by perturbations in the big bang and driven by the expansion of the universe. The book is "Epic of Evolution: Seven Ages of the Cosmos" but he also has some papers on-line that discuss this. I first came across his work at a very opportune moment when reading the New Scientist. You can think of the biological age as just one stage along the arrow of time. Another fantastic book is "Into the Cool: Energy Flow, Thermodynamics and Life", which is more focused on abiogenesis but discusses the subject in terms of non-equilibrium thermodynamics.
  5. And conversely you can take it the other way and think of corporations as multi-person organisms, in the same way that an ant or bee colony can be understood as a single organism. Either way it can quite often be useful to think in terms of the rise of complexity over time. For example, when trying to understand what emotions are, it can be fruitful to think of how single-celled organisms might have inadvertently started to communicate with each other via chemicals when they sensed the same chemical environment that they were affecting.
  6. Aaron Sloman once wrote a really good paper going through every possible definition of free will and demonstrating how it was inadequate. I can't find the paper on-line anymore although I am sure I have a copy on some hard disk somewhere. I just see the concept of free will as a quagmire on a par with the concept of 'qualia'.
  7. You can completely ignore the whole issue of free-will. It's not useful when talking about intelligence. It's like talking about whether a water wheel chooses to turn because of the flow of water. Instead it's more accurate to talk about a water wheel turning because the flow is strong enough. What's probably better to look at is the balance between cognition and emotions and/or instinct. Cognition widens the possibilities when performing action selection. Emotions and instincts narrow the available choices.
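The widen/narrow framing can be sketched as a two-stage pipeline. This is a hypothetical toy, not a model from any cited work: all the function names, actions and the fear parameter are invented for illustration. Cognition enumerates candidate actions; an emotional state then vetoes some of them before anything is selected.

```python
def cognition(situation):
    """Widen: enumerate every action the agent can think of here."""
    candidates = ["wait"]
    if "food" in situation:
        candidates += ["eat", "store food", "share food"]
    if "threat" in situation:
        candidates += ["flee", "hide", "fight", "investigate"]
    return candidates

def emotion_filter(actions, fear_level):
    """Narrow: high fear vetoes slow or risky options before selection."""
    if fear_level > 0.7:
        risky = {"eat", "store food", "share food", "fight", "investigate"}
        actions = [a for a in actions if a not in risky] or ["flee"]
    return actions

candidates = cognition({"food", "threat"})
narrowed = emotion_filter(candidates, fear_level=0.9)
print(candidates)
print(narrowed)
```

Note there is no "free will" term anywhere in the sketch: the behaviour falls out of how wide cognition casts the net and how hard the emotional state prunes it, which matches the water-wheel point above.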
  8. The question "What is free will?" presupposes that free-will exists. I don't believe in free-will inasmuch as I don't think it can ever be properly defined. Definitions are tools and they must be useful. It is not useful to talk about free will when trying to figure out how a brain works or how we can build an A.I. The concept of free-will can only ever be useful when referring to something external to an agent. For example, "I signed the contract of my own free will rather than because a gun was put to my head". This may be useful in a legal context but unfortunately does not help with discussions involving philosophy, science or engineering. It's not a useful term when discussing the internal workings of an agent's brain because there is no clear demarcation between when you would and would not use it.
  9. The reason for using an intelligent agent rather than an automaton is that the former has more autonomy. This means that it can be more productive and requires less supervision, which saves on user costs. Compare a vacuum cleaning robot with a vacuum cleaner. The former can be set off to clean the carpet while you go off and stack the dishwasher. The latter needs to be manually pushed around the carpet. The same applies to computers. A machine intelligence (I'm coming round to the idea that there is value in using this definition) can be designed to perform more complex tasks with fewer instructions from the user. On the other hand, if you've ever written a computer program you will quickly appreciate that everything needs to be explicitly stated and that the program can crash whenever there is any ambiguity. This becomes even more important if you are talking about robots on other planets, where there is significant lag in communication, or robots used in hostile environments. Even the vacuum cleaning robot described above will benefit from more intelligence, because the thing about real world environments is that you cannot anticipate in advance everything that an intelligent agent will encounter. What if the owner has a pet dog that barks at the robot? Or a hamster? Or the dog poops on the carpet and the robot tries sucking it up? Or the robot encounters stairs? There will always be new things that your explicit programming won't cover. Lastly, the automaton will not necessarily be cheaper. The initial R&D will be cheaper for an automaton, but once you have an intelligent computer program you can make as many copies as you want, and this will save on costs for the user because the agent requires less supervision and can be more productive. The cost of the body will be the same for an intelligent robot and an automaton.
  10. I don't think it is that much different. Although to me, work is not really that much different from whoring yourself: you're selling the use of your brain and your body (to a certain extent) for a limited period of time. At least dogs, and possibly soon robots too, get the chance to take pleasure from their work. I don't know how many people have a job they actually enjoy doing.
  11. Another reason to create truly intelligent agents is that by doing so we better understand ourselves.
  12. With regard to the OP about synthetic intelligent agents being slaves programmed to take pleasure from their jobs, how is this different from training dogs to sniff out explosives or to lead blind people about?
  13. While this is technically true you are also merely arguing about definitions. Definitions are tools and should only really be discussed in terms of how useful they are. If a definition is no longer useful then we should change it. The 'Artificial' in A.I. is a historical hang-over from classical A.I. and is still in use because people are familiar with it. It's actually better to talk about Synthetic Intelligence but it's just not really an issue yet because the field just hasn't made enough progress. But I do appreciate that (hopefully) there will one day be the need to differentiate between true synthetic intelligence and smart programming even though both of them are artificial rather than naturally occurring. At the moment though, 95% of the field of A.I. is essentially just trickery. It's interesting to note though that the first journal in the field of artificial emotions is the International Journal of Synthetic Emotions. Possibly because the term "Artificial Emotions" hasn't had a chance to catch on. Sapience may be more specifically concerned with intelligence than sentience, but some argue that intelligence is not possible without the ability to feel. Minsky for example posed the question of whether you can have intelligence without emotions. There are many good reasons to believe that you cannot. I would also question whether you can have sentience without some degree of sapience. As with all other troublesome definitions, intelligence can be plotted on a sliding scale. Some agents are more intelligent than others. At the bottom of the scale are the stimulus / response agents. So at what arbitrary point do we say that something is sapient or sentient? The same could be said for consciousness.
  14. Built a double-decker asparagus launcher and set course from Kerbin to Eeloo via low Kerbol orbit. I drifted a bit during the hour-long burn so I don't know if I will miss Eeloo.
  15. I had recently built a double-decker asparagus rocket using orange tanks. Then I found this thread and decided to upgrade. It drops four double tanks at a time. I basically start with a cross, extend it by one, then wrap it around the centre. This left me with a gap in the outer middle which was going to be too hard to fit the last tank into, so I had to change the order in which they dropped. Unfortunately this meant that I ended up with a flying swastika. On top of the rocket is a pod with a science lab and loads of experiments strapped to it. Writing this, I've just realised that I forgot to put parachutes on it ... Doh! The last stage has four atomic engines strapped to it. Once I got the rocket into orbit I decided to try for Eeloo regardless of transfer windows and such things. I added a manoeuvre and the only course I could find was Kerbin -> low solar orbit -> Eeloo. I thought to myself, yeah, let's try for it! The burn took a fair while and the rocket seemed to drift off course for a while. I also don't have much fuel left for slowing down once I reach Eeloo. If it doesn't manage it, I wonder if Eeloo will slingshot it? I might also have drifted too far off course and not make Eeloo at all, but if that's so then I am sure they will meet up with many other planets as they continue to orbit the solar system.
  16. I tried using a grabber to remove some debris around Mun. I was going to grab it, de-orbit it and then let go. Aside from being hideously expensive with fuel, the grabber wouldn't grab it. I've managed to keep the orbit around Kerbin free of debris so far despite building a space station around Mun. Still, it was a useful exercise. I've mainly been practising docking manoeuvres around Mun and trying to remember how to spell 'manoeuvres' when doing quick saves.
  17. I've mostly watched Scott Manley videos so I can listen to his lovely voice. I then get my husband who also has a lovely Scottish accent and who has watched the videos to show me what to do instead. I find it a far more enjoyable way to learn but it does stop my husband from playing the game himself every waking hour.
  18. Hi all, My husband found out about KSP from XKCD. He enjoyed the demo and said it was the game he'd been looking for for years, so I bought it for him on Steam as a gift (I'm the only breadwinner until he gets good enough at German). He's been playing it every hour of the day since then. I decided to buy it for myself as well. I've been enjoying it quite a bit. It's so difficult finding games which are genuinely new and interesting. I also like learning new skills, even ones which are useless in everyday life like orbital mechanics! I'm taking it slow and trying to get really good at each stage before I move on.