Technological Singularity


WestAir


I was at the airport food court in line at a Wendy's yesterday when I overheard a passenger say something along the lines of "Flying isn't that hard... by 2020 we'll all be flying drones across the country." He was absolutely wrong (I doubt even China's lightning-fast aerospace industry can design, build, test, and certify a pilotless commercial airliner in 6 1/2 years), but it got me wondering: if our entire reason for mechanizing an industry or workforce is how easy it is to do, and it's extremely easy for a machine to do any non-leadership job, then what will happen to mankind when we invent the first computer capable of sentient thought, à la Commander Data from Star Trek? Surely it could do any job - leadership or otherwise - better than any human can.

What I'm asking is what will happen to mankind when we invent a computer that can do everything better than us, including but not limited to inventing better computers?


Sentience in computers is a debate in and of itself, as many would say that such an achievement is impossible without the computer actually synthesizing all of the relevant brain chemicals. As for your question, automation will probably be limited to math, physics modelling, and manufacturing. The human spirit of piloting craft, of risking life and limb for thrill and glory, will keep the craft of our world from becoming entirely pilotless. A major government might well convert its whole military to drones, but overall there will still be some human-piloted machines in the world, because humans want to control things themselves.

Edited by Themohawkninja

Look at subway trains, for example. Driving a subway train is pretty trivial: close doors, accelerate, brake, open doors, repeat. Computer-controlled signals tell the train driver when to do what. Cutting out the middle-man and fully automating trains has been technically possible for decades.
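Just to show how little is going on, the whole cycle could be sketched as a toy control loop (purely hypothetical code; real train control, e.g. CBTC signalling, is vastly more involved):

```python
# Toy sketch of the subway driving cycle described above.
# All signals and actions are made up for illustration.

def close_doors(): print("doors closed")
def accelerate():  print("accelerating")
def brake():       print("braking")
def open_doors():  print("doors open at platform")

# Computer-controlled signals tell the "driver" when to do what;
# here the driver is nothing but a dispatch table.
handlers = {
    "depart":   lambda: (close_doors(), accelerate()),
    "approach": brake,
    "stopped":  open_doors,
}

for signal in ["depart", "approach", "stopped"]:  # one station cycle
    handlers[signal]()
```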

Are there fully automated subways? No, because people wouldn't trust an automated train. They need a human at the controls to feel safe.

Even if drone planes were feasible (currently, military drones have a much higher crash rate than manned planes on similar missions, which means the technology isn't yet safe enough for transporting people), it would take decades until people accepted them.

Edited by Crush

What I'm asking is what will happen to mankind when we invent a computer that can do everything better than us, including but not limited to inventing better computers?

A real-life Skynet, leading to the events of Terminator, and quite possibly the Matrix.


What I'm asking is what will happen to mankind when we invent a computer that can do everything better than us, including but not limited to inventing better computers?

If it has been programmed correctly, the first thing it will do is invent a way to upgrade humans to its level. If not, we will have to do that on our own.


Are there fully automated subways?

Yes? http://en.wikipedia.org/wiki/Copenhagen_Metro

Automation is inevitable; it won't be long now before demonstrated safety leads to widespread acceptance of self-driving cars. Autopilots already handle most of an airliner's flight, and that will increase to the point where pilots are only there for emergencies, tradition, and passenger perception. Eventually they'll be seen as old-fashioned and unnecessary, but I agree that is decades away.

There will always be manual-control enthusiasts, but it gradually becomes a smaller and smaller niche. Look at how few people drive stick these days.

Edited by tavert

Most robots-take-over-the-world science fiction stories forget a crucial aspect: software has no motivation. Software does what it is programmed to do. While it is possible to create expert systems which use neural networks (computer logic simulating how the human brain works) and make better and faster decisions than humans in specific areas, they are always limited to that specific area. No matter how good a program gets at intraday stock trading, weather simulation, or flying airplanes, it will never try to take over the world, because it simply isn't programmed to do that.
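To make that concrete, here is a minimal sketch of what such a system boils down to: a fixed mapping from an input space to an output space (a toy two-layer network; the weights are random here, where a real system would have fitted them to stock prices, weather data, or flight telemetry):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def predict(x):
    hidden = np.tanh(x @ W1)   # "neural network": weighted sums + nonlinearity
    return hidden @ W2         # e.g. (buy score, sell score)

# Whatever the input, the result is always just a point in the same
# two-dimensional output space. "Take over the world" isn't in it.
print(predict(np.array([1.0, 0.5, -0.3, 2.0])))
```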

The robots-take-over-the-world scenario is nevertheless prevalent in science fiction for two reasons:

1. People are always afraid of anything new and afraid of losing control, so it makes a good device for a horror story.

2. Authors are used to creating characters from personalities and motivations. Software is an abstract concept, not a character, and you can't have an abstract concept as a villain in a story. So the authors turn the concept into a character and attribute personalities and motivations to things which shouldn't have any.

Just look at the classic "AI villains". They only work because the author attributes a motivation and personality flaws to them. Take HAL 9000: "he" has a motivation (complete the mission) and a personality flaw (believes himself to be infallible) which turns him into a villain. Or GLaDOS - motivation: wants the company of test subjects; personality flaws: sadistic and vengeful. They act less like machines and more like characters. That's why they work as villains.

By the way: if you want to read a more plausible science fiction book about how a world after an AI singularity could look, I can recommend the short story collection "I, Robot" by Isaac Asimov (not the movie - the movie has nothing to do with the book). Asimov doesn't treat the AIs as characters but as machines which follow their core programming, and creates some really interesting stories from it while neither vilifying nor glorifying the machines.

Edited by Crush

I agree. A disturbingly high percentage of our own psychic processes is based on primal instincts. Eat, mate, kill, flee, protect the kin, fear the darkness... Ambition, jealousy, phobias, altruism - all of it can be traced back to the first mammals and beyond. Hundreds of millions of years of evolution hardcoded all this baggage into our nervous systems. How could we teach computers those things? And why would we? The first true AI will come with hardcoded obedience, protectiveness towards humans, and a thousand failsafe systems :D Unless it's created by the military... oi...


By the way: if you want to read a more plausible science fiction book about how a world after an AI singularity could look, I can recommend the short story collection "I, Robot" by Isaac Asimov (not the movie - the movie has nothing to do with the book). Asimov doesn't treat the AIs as characters but as machines which follow their core programming, and creates some really interesting stories from it while neither vilifying nor glorifying the machines.

Yes, and in one of them the machines take over the world in order to prevent the harm that would come to humans from running their own affairs, which the machines have figured out is not a good way to run a planet :)


You are discussing two separate ideas. The "Terminator scenario", where humans are undone by our own technology, refers to a situation where a computer is just operating on code which humans have given it. It is using its resources to solve a problem it has been programmed to solve - e.g., enslaving or killing humans in order to protect us from ourselves. (Problem: overpopulation. Solution: fewer humans. Problem: pollution. Solution: fewer humans. Problem: war. Solution: fewer humans. Etc.)
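A toy sketch of that failure mode, with made-up numbers: the objective counts "problems" but assigns no value to human welfare, so a literal-minded optimizer lands on the degenerate solution every time:

```python
# Hypothetical objective: minimize the sum of remaining "problems".
def cost(outcome):
    return outcome["overpopulation"] + outcome["pollution"] + outcome["war"]

actions = {
    "build infrastructure":   {"overpopulation": 3, "pollution": 5, "war": 2},
    "switch to clean energy": {"overpopulation": 5, "pollution": 1, "war": 2},
    "fewer humans":           {"overpopulation": 0, "pollution": 0, "war": 0},
}

# The machine isn't malicious; it just minimizes what it was given.
best = min(actions, key=lambda a: cost(actions[a]))
print(best)  # -> fewer humans
```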

The idea of the singularity is the first time a computer makes a decision because it WANTS to, not because it is just executing a set of instructions which we have given it. Once that happens (if it is even possible), the entire relationship between us and technology is completely changed.


Yes, and in one of them the machines take over the world in order to prevent the harm that would come to humans from running their own affairs, which the machines have figured out is not a good way to run a planet :)

You seem to be referring to the short story "The Evitable Conflict".

The difference is in how this is handled. It is not portrayed as a dystopia but as a utopia. Dissidents aren't opposed violently, only obstructed in a way which is as humane as possible and feels more like bad luck than oppression.

Asimov's idea is that the Three Laws of Robotics define the priorities of robots as: 1. protect humans, 2. follow orders, 3. protect themselves. The later zeroth priority - protect humanity as a whole - grows as a corollary of the first three. For that reason Asimov's robots are unable to violently oppress humanity, because their whole philosophy is based on serving humanity's interests.

This stands in contrast to the usual tyrant AI villain, which acts against the general interests of humanity by taking away freedom, prosperity, and safety.
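As a purely hypothetical sketch (Asimov's laws are a narrative device, not a spec), the priority ordering reads like a lexicographic filter over candidate actions:

```python
# Three Laws as a lexicographic filter: each law only gets a say
# among the actions the higher-priority laws have allowed.

def choose_action(candidates):
    safe = [a for a in candidates if not a["harms_human"]]        # First Law
    obedient = [a for a in safe if a["obeys_order"]] or safe      # Second Law
    prudent = [a for a in obedient if not a["harms_self"]] or obedient  # Third
    return prudent[0] if prudent else None

candidates = [
    {"name": "obey order, destroy self", "harms_human": False,
     "obeys_order": True, "harms_self": True},
    {"name": "obey order safely", "harms_human": False,
     "obeys_order": True, "harms_self": False},
    {"name": "oppress humans", "harms_human": True,
     "obeys_order": True, "harms_self": False},
]
print(choose_action(candidates)["name"])  # -> obey order safely
```

Oppression never survives the first filter, which is exactly why a Three-Laws robot can't play the tyrant.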

Edited by Crush

...

The idea of the singularity is the first time a computer makes a decision because it WANTS to, not because it is just executing a set of instructions which we have given it. Once that happens (if it is even possible), the entire relationship between us and technology is completely changed.

I think it's totally possible for a man-made brain to make a decision for itself. Our brains don't work by magic; decisions are made from hard-wired connections from neuron to neuron. If we had the technological capability to make all those billions of connections, then the end result should (I'm no neuroscientist) be a perfectly emulated human brain that can think and reason just like ours do.


I think it's totally possible for a man-made brain to make a decision for itself. Our brains don't work by magic; decisions are made from hard-wired connections from neuron to neuron. If we had the technological capability to make all those billions of connections, then the end result should (I'm no neuroscientist) be a perfectly emulated human brain that can think and reason just like ours do.

The theory is sound. If you scanned a brain and replicated every single neuron connection, you would assume it would function like the original. The idea seems like it could be technologically feasible in the future, but whether it would actually work that way is unknown. The other, reverse-engineering way would be to map the entire functioning of a human brain into software: if you could detect and map every single impulse in the brain, you could create a simulation of it on a computer, and then run that simulation. I've seen/read that both of these methods are already being attempted for study purposes, but with current technology they are nowhere near what it would take to make a working replica.
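For a sense of scale on the simulation route: even the standard simplified neuron model (a leaky integrate-and-fire unit, sketched below with made-up parameters) is trivial on its own; a brain-scale emulation would need tens of billions of far more detailed units, plus the complete map of connections between them:

```python
# One leaky integrate-and-fire neuron, the textbook toy model.
dt, tau = 0.1, 10.0                               # ms step, membrane time constant
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane potentials, mV
v, spike_times = v_rest, []

for step in range(1000):                          # simulate 100 ms
    current = 20.0 if 200 <= step < 800 else 0.0  # inject input from 20-80 ms
    v += dt * (-(v - v_rest) + current) / tau     # leaky integration
    if v >= v_thresh:                             # threshold crossed:
        spike_times.append(step * dt)             # record a spike
        v = v_reset                               # and reset the membrane

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.1f} ms")
```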


Thanks for the link, I didn't know about that.

There are plenty of other automated subway systems around the world (Vancouver, Dubai), and if anything they are regarded as much safer than human-controlled systems. I think people will accept automated/drone planes once they see proof that it works.

Driverless cars have already been tried and tested; I give it no more than 2-3 decades before most if not all cars on the road are automated, and I very much look forward to that day. The roads will be much safer for it.


You seem to be referring to the short story "The Evitable Conflict".

The difference is in how this is handled. It is not portrayed as a dystopia but as a utopia. Dissidents aren't opposed violently, only obstructed in a way which is as humane as possible and feels more like bad luck than oppression.

Asimov's idea is that the Three Laws of Robotics define the priorities of robots as: 1. protect humans, 2. follow orders, 3. protect themselves. The later zeroth priority - protect humanity as a whole - grows as a corollary of the first three. For that reason Asimov's robots are unable to violently oppress humanity, because their whole philosophy is based on serving humanity's interests.

This stands in contrast to the usual tyrant AI villain, which acts against the general interests of humanity by taking away freedom, prosperity, and safety.

I think you underestimate the desire for freedom in some individuals. I know it is a rare trait these days, but some want to live their lives on their own success, not watched over by some parenting computer. If they could not flee into the wild, they would break the system from within.


This stands in contrast to the usual tyrant AI villain, which acts against the general interests of humanity by taking away freedom, prosperity, and safety.

One man's prosperity is another man's chains. To provide prosperity for all, the inequities of our current power and economic system would have to change. This would lead to ... a curtailment of the freedom and prosperity some enjoy in order to provide more global prosperity. That sounds dangerously like communism!

Is the AI god which removes the freedom and prosperity of stock brokers and bankers (because their jobs are no longer relevant) a villain? How about bus drivers, pilots, engineers, scientists, bloggers, advertising executives, assembly workers?

What is left for humans to do if the AI can do it faster, better and for the betterment of all?


You seem to be referring to the short story "The Evitable Conflict".

The difference is in how this is handled. It is not portrayed as a dystopia but as a utopia. Dissidents aren't opposed violently, only obstructed in a way which is as humane as possible and feels more like bad luck than oppression.

Of course, I never claimed he was describing a dystopian situation. However, the humans in the story are more resigned than happy when they find out what has happened.

They know it's "for their own good" (god, I hate that term) but don't like the idea that a machine can do better than them at strategic decision-making and deciding what's best for humanity (which is also where the whole dystopian idea comes from: the fear that machines will decide the Earth is better off without humans, as in stories like Terminator).


I think you underestimate the desire for freedom in some individuals. I know it is a rare trait these days, but some want to live their lives on their own success, not watched over by some parenting computer. If they could not flee into the wild, they would break the system from within.

Yes, and I think there is also an underestimation of humans' desire for challenge and purpose. I could be wrong, but if robots decided to do absolutely everything for us and just kept injecting us with chemicals to make us permanently happy, I reckon a LOT of people would outright reject the idea and do anything in their power to escape it. I don't think humanity's only desires are "happiness" and survival at this point. As the captain in WALL-E says, "I don't want to survive, I want to live!"


But what if, instead of machines becoming this end-of-all entity that decides the fate of humanity, we simply continue to allow mechanization to replace us in our work and duties? One day we'll build a computer capable of following complex ideas like law. Imagine if, far into the future, an AI (arguably sentient, but that's beside the point) is put onto the Supreme Court of, say, Switzerland. Let's say the argument for doing so is that the machine is capable of processing logical thought, understands law, and is completely free of the downsides of a human Justice: it's devoid of greed and fatigue, can't be bought out or serve special interest groups, and can be designed to always vote in favor of the people.

In the above scenario, it's obvious an artificial Supreme Court Justice would be fairer and more honest than a human Justice, and if it can do the same duties reliably, why wouldn't we then replace all the Justices? And why stop there? We know that Wall Street is crawling with greedy, selfish, lying people who often give more attention to company shareholders and family/friends than to the workers and people who invest time, money, and energy in them. When computers become complex enough, it would certainly be feasible to replace bankers and CEOs with honest AI that couldn't care less about greed or special interests.

In fact, putting an AI at the head of companies or countries might be the most altruistic move an advanced society could undertake. However, the question remains: what happens to humanity when it's realized that our inventions can do everything we can do, only several hundred thousand times better? How would our service-based, monetary-spun society run when jobs are outsourced to mechanized workers and free time begins to drastically overshadow the circulation of wealth?


How would our service-based, monetary-spun society run when jobs are outsourced to mechanized workers and free time begins to drastically overshadow the circulation of wealth?

I would like to refer to Maslow's hierarchy of needs:

[Image: Maslow's hierarchy of needs pyramid]

Machines could only care for the lower two layers. The third could maybe be fulfilled by very advanced AI specialized in human interaction. But the fourth layer can - by definition - only be fulfilled by other humans, and fulfillment of the top layer can only come from one's own efforts. When all the physiological needs of humans are met, they would mainly strive for self-actualization.

When wealth becomes irrelevant because everyone's basic needs are fulfilled, the new thing to strive for will be praise and acceptance from other humans. That's something a machine can never replace.

Our social lives will become an even greater priority.

We would also seek the feeling of accomplishment by spending most of our time on what would nowadays be called hobbies. We would create art and other media, or train to become better at sports and games. That would then become our goal in life and our self-identity.

Edited by Crush

I hope we can evolve into a society like Iain M. Banks's Culture, but even in that extremely optimistic scenario there will still be an immensely painful transition, as more and more jobs are done by machines without the prices of raw materials and goods coming down fast enough for massive welfare to keep up.

This doesn't even require breakthroughs in hard AI, just gradual improvement.

