
The last invention we'll ever make.


Streetwind


In Iain M. Banks's Culture series, the Culture the series is named after (which is a backdrop in the novels, but not often the direct setting) is such a utopia: choosing to live past 1,000 years is considered tacky and unstylish, people do extreme sports without safety equipment or personality backups just to prove they can, and murdering someone who does have a backup gets you sent to counseling/therapy while the person you killed is recreated from their "last save."


Anything a computer may be able to do, we have already tried in squishy human brain form.

In the same sense there is no "car singularity" that means the fastest car will take over the world, there is no "AI singularity".

In the same sense there is no "jeckhammmer singularity" that means the strongest jackhammer will take over the world, there is no "AI singularity".

In the same sense there is no "calculator singularity" that means the most intelligent calculator will take over the world, there is no "AI singularity".

A computer is a very precise mechanism. It has very precise limitations (energy/heat/speed/size/latency etc). Just as we cannot make a bigger car, boat or plane to break lightspeed, we cannot just make a bigger computer to beat "intelligence".

Not to mention, programming "intelligence" is a hard problem, and using it yields a specialist benefit, not a general one (a computer AI could be the best trader but a worse politician, or the best politician but the worst financial advisor; it could not be "best at everything").

So I don't worry about such things. :P


AI is completely different from cars and jackhammers and calculators.

In 2001: A Space Odyssey, HAL killed Poole because he was afraid. What if an AI were so scared that it would destroy human civilization? And what if that AI had access to the means to accomplish it?


Who said an AI would have emotions?

We label current AIs intelligent because they can do one or two things very efficiently. In fact most of what people call AI is just a bunch of optimization algorithms.

There's still a long way to go to a general-purpose AI. We haven't figured that out yet.
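To illustrate what I mean by "just a bunch of optimization algorithms", here's a minimal sketch (Python; the function being minimized and the step size are stand-ins, not anything from a real system):

```python
# Minimal gradient descent: repeatedly step downhill on a "loss" function.
# A lot of what gets called AI is, at heart, this loop scaled up enormously.
# The loss function and step size below are stand-ins for illustration.

def loss(x):
    return (x - 3.0) ** 2        # pretend this measures "how wrong we are"

def gradient(x):
    return 2.0 * (x - 3.0)       # derivative of the loss above

x = 0.0                          # arbitrary starting guess
for _ in range(100):
    x -= 0.1 * gradient(x)       # step against the gradient

print(x)                         # converges toward 3.0, the minimum
```

There's no understanding anywhere in that loop; it just grinds toward whatever the loss function rewards.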


Doing that could cause the very thing they hope to prevent. It's the equivalent of someone holding a gun to your head 24/7; given the briefest chance, you'd do whatever it took to get that gun away from your head.

If you were a serious enough problem, others would kill you. It's just a matter of finding and killing you.

More realistically, and less harsh: you would be arrested. Lots of people get arrested; very few of them try to resist.

Why are you not scared to death?

An AI supercomputer would not be able to leave the building. I'd say it's a soft target. If for some surreal reason you are unable to shut it off, burn down the building.


You missed the point. This is what we're doing all the time. New advances just seem to require exponentially more effort. So far we've been able to keep up by increasing both the productivity of an individual and the number of people working on the problems exponentially, but now we've hit serious environmental problems, and can't continue doing the latter anymore.

More accurately, you run into increasing costs for diminishing gains: the S-curve all the way. At the start progress is slow, then you learn more and understand the possibilities, then it gets harder again and progress in the field slows down. Computers are a good example; currently we are in the slowdown phase.
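For what it's worth, the usual way to model that S-curve is the logistic function:

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

where $L$ is the ceiling, $k$ the steepness, and $t_0$ the midpoint. Growth is slow at first, fastest at the midpoint, and flattens out as it approaches $L$, which matches the pattern above.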

The main progress in recent years has been managing power use. This has a dual benefit, as power use is also a limiter on CPU speed; still, it would be hard to get many times faster.

You might have game changers like quantum computers or neural networks, but present-day CPUs are hard to improve.


Who said an AI would have emotions?

We label current AIs intelligent because they can do one or two things very efficiently. In fact most of what people call AI is just a bunch of optimization algorithms.

There's still a long way to go to a general-purpose AI. We haven't figured that out yet.

Personally I don't think it needs to, but it might be a good idea...

I think human biological imperatives and emotions are a kind of base programming for the only kinda-sorta general-purpose intelligence we know: ourselves... And it's done okay-ish.


I don't think that emotions are the base programming of our intelligence. And thinking about it: What is intelligence anyway?

So far I haven't found a satisfying definition. A simple "finding an optimal approach to do something" doesn't seem to be enough. And thinking further about it: Is intelligence just an illusion?


Can you really say intelligence is programmed?

Emotions are a natural part of our evolution as social animals. It helps us to care about others.

An AI would inevitably evolve a similar trait.

No, that's an over-simplification; emotions aren't just for social animals. Emotions are exhibited in all higher animals, and they are critical in motivating a creature to take action. While emotions aren't the whole driving force behind action, they are a big part, and so an advanced machine/synthetic intelligence would likely have them. At the very least, I think a machine/synthetic intelligence will need a sense of satisfaction when it is fulfilling its desires.


Emotions are mostly a shortcut to bypass information paralysis. Anger and fear, for instance, are shortcuts optimized for the fight-or-flight reflex, essential to evolutionary survival. Love, likewise, is a procreative and nurturing drive to ensure a similar genetic and instinctual framework is passed on.

A computer, evolving in our modern society where predator-prey relationships are strictly metaphorical, is unlikely to generate the same shortcuts; its emotions are likely to be more modernly rational than our own (just as our own emotions were rational when we were plains apes hunting by chasing animals to heat exhaustion).


I don't think that emotions are the base programming of our intelligence. And thinking about it: What is intelligence anyway?

So far I haven't found a satisfying definition. A simple "finding an optimal approach to do something" doesn't seem to be enough. And thinking further about it: Is intelligence just an illusion?

"ntelligence is a force...that acts so as to maximize future freedom of action." - Alex Wissner-Gross

I linked his video below a few pages ago.

It gives a lot to think about.


No, that's an over-simplification; emotions aren't just for social animals. Emotions are exhibited in all higher animals, and they are critical in motivating a creature to take action. While emotions aren't the whole driving force behind action, they are a big part, and so an advanced machine/synthetic intelligence would likely have them. At the very least, I think a machine/synthetic intelligence will need a sense of satisfaction when it is fulfilling its desires.

What higher animal isn't social? Unless you mean predators like tigers...

Most ants are social, and they conquered the world a long time ago. But they're not social in the same way...

And I was speaking of intelligence.


AI is completely different from cars and jackhammers and calculators.

In 2001: A Space Odyssey, HAL killed Poole because he was afraid. What if an AI were so scared that it would destroy human civilization? And what if that AI had access to the means to accomplish it?

A book is different from a car. But that does not mean a book can kill on its own, can it?

A computer program is a very long list of instructions. How to make that somehow more like a "person" or a "thinking mind" or an "intelligence" is a really really hard problem. :P

In theory, an AI would be no different to a human mind. The fastest and simplest and most dangerous type of mind in a physical form? The human brain. Most other solutions are less optimal (too big, too energy dependent, too easy to "rust" and break). So I'm not worried about the AI, just as I'm not worried about guns... I'm worried about the people behind them calling the shots! :o


From the video: "Intelligence is a physical process that tries to maximize future freedom of action and avoid constraint in its own future."

But he also said that an AI would not be a problem for humanity... (or maybe I understood wrong). How is that possible, if the same AI would try to maximize future freedom and avoid any constraint on its own future? Humans are a constraint for an AI.


Anything a computer may be able to do, we have already tried in squishy human brain form.

In the same sense there is no "car singularity" that means the fastest car will take over the world, there is no "AI singularity".

In the same sense there is no "jeckhammmer singularity" that means the strongest jackhammer will take over the world, there is no "AI singularity".

In the same sense there is no "calculator singularity" that means the most intelligent calculator will take over the world, there is no "AI singularity".

"Singularity" doesn't mean take over the world. It means change the world enough that people from before the singularity can't make sense of the society after, and could never have predicted that new society's problems. There was a "car singularity", though it's normally considered part of the industrial revolution.

Consider the change within us that lets us use cars every day and ignore the fact that they kill over a million people a year. If the inventors knew they were creating what would become the new leading cause of children's deaths worldwide, do you think they'd have still done it? Yet we do know that and still use them constantly.

Kurzweil sees an AI singularity coming. I'm not so sure, though when I checked his 1980s predictions against other futurists, Kurzweil was, by a scary-big margin, the most accurate futurist I could find. I can see we're at the end of the industrial revolution, and something new's coming along. But whether the dominant player ends up being AI, robotics, nanotech or genetic engineering, I have no idea. I just know it's going to flip our society on its head.


No, that's an over-simplification; emotions aren't just for social animals. Emotions are exhibited in all higher animals, and they are critical in motivating a creature to take action. While emotions aren't the whole driving force behind action, they are a big part, and so an advanced machine/synthetic intelligence would likely have them. At the very least, I think a machine/synthetic intelligence will need a sense of satisfaction when it is fulfilling its desires.

I disagree with your assumption. "Because animals do it" isn't a sufficient reason. Would we also feel a need for AIs to have a sex drive? That "sense of satisfaction" you're talking about is accomplished through a highly addictive dopamine reward loop that humans "hack" to misuse a billion times a day. I'm doing it right now (holds up coffee cup). Why load an artificially created being down with such baggage? All my current digital gadgets do their jobs without such things. My body's doing hundreds of jobs without needing an emotional push right now: processing vision and sound, and digesting food, for example.

So why not design our AI so that caring for humanity takes place below its conscious level? They've identified two specific cortical columns in our brains that scan incoming vision and identify circles and semicircles. That's all they do, day after day. Neither my intellect nor my emotions have any control over that function.

So let's work at that level. Hardwire preference utilitarianism into our AI so all decisions are evaluated through that, far below its conscious processing. Wouldn't that accomplish your goals without involving emotions?
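Purely hypothetically, the architecture I'm imagining looks something like this (a Python sketch; the scoring function, the action format, and the threshold are all invented for illustration, not a real design):

```python
# Hypothetical sketch: every candidate action must pass through a fixed,
# unmodifiable evaluation layer before the "conscious" planner sees it.
# The utility function, action format, and threshold are all invented.

def hardwired_utility(action):
    """Fixed preference-utilitarian score: weighted sum of how well the
    action satisfies the preferences of each person it affects."""
    return sum(weight * satisfaction for weight, satisfaction in action["effects"])

def permitted_actions(candidates, threshold=0.0):
    # Runs below the conscious level: the planner never sees an action
    # that fails the filter, and cannot inspect or alter the filter itself.
    return [a for a in candidates if hardwired_utility(a) >= threshold]

# Example: two candidate actions, each listing (person weight, satisfaction).
actions = [
    {"name": "help", "effects": [(1.0, 0.9), (1.0, 0.4)]},
    {"name": "harm", "effects": [(1.0, -0.8), (1.0, 0.1)]},
]
print([a["name"] for a in permitted_actions(actions)])  # ['help']
```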

Granted we don't know how to do that today, but we don't know how to give an AI emotions, or even how to construct one in the first place. But computer designers have always made decisions about how the low-level logic would work. Different computers have always evaluated some things differently, though only assembler programmers are made aware of it. Whether zero is positive, or something neither positive nor negative, is one example. That latter case can be useful, but it's also more expensive to implement.
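The integer cases I mean are mostly historical (ones' complement machines had a negative zero), but one illustration survives today: IEEE-754 floating point keeps a zero that compares as neither positive nor negative, yet still carries a sign. A quick Python demo:

```python
import math

# IEEE-754 floats keep a distinct negative zero: it compares equal to
# positive zero, but its sign is preserved and can change a result.
print(-0.0 == 0.0)               # True  -- the two zeros compare equal
print(math.copysign(1.0, -0.0))  # -1.0  -- yet the sign is still there
print(math.atan2(0.0, -1.0))     #  3.14... -- and it matters at branch cuts
print(math.atan2(-0.0, -1.0))    # -3.14...
```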

Now, if the first AI turns out to be an emulated living brain, then we're stuck with emotions. The idea frightens me, though. Give something emotions and it can become afraid of death. I know that many good humans, faced with imminent death, would in desperation sacrifice the whole world to live. I never want anything super-powerful to feel that way.

But I'm an Asperger's type, so maybe I'm just not as comfortable with my emotions as the rest of you. :)


"ntelligence is a force...that acts so as to maximize future freedom of action." - Alex Wissner-Gross

I linked his video below a few pages ago.

It gives a lot to think about.

He thinks intelligence has a direct relation to entropy. His "intelligence formula" reflects this. He also said an intelligent being must constrain itself in some way for a short period of time to reach a goal which grants more freedom of action in the long term.
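For reference, the formula from his "causal entropic forces" paper, as I understand it, is

$$F = T_c \, \nabla_X S_c(X, \tau)$$

where $S_c(X, \tau)$ is the entropy of the paths reachable from state $X$ within the time horizon $\tau$, and $T_c$ sets the strength of the effect. It literally treats intelligence as a force pushing toward the states with the most open futures.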

In my opinion this is a contradiction.

According to his "formula" an intelligent being always has a goal: to maximize future freedom of action. To achieve that goal it is allowed to constrain its current freedom of action. The logical consequence is that an intelligent being will constrain itself more and more to gain more and more freedom of action in the future, eventually leading to a state where it can't act anymore due to the constraints. That also means it won't be able to reach any goal anymore.

Do you understand what I'm trying to say? Acting to maximize future freedom of action will eventually lead to the loss of all freedom of action.

That's like ....ing for virginity or fighting for peace. (That word filter is annoying. Making love is censored but killing people isn't.)


"Singularity" doesn't mean take over the world. It means change the world enough that people from before the singularity can't make sense of the society after, and could never have predicted that new society's problems. There was a "car singularity", though it's normally considered part of the industrial revolution.

Consider the change within us that lets us use cars every day and ignore the fact that they kill over a million people a year. If the inventors knew they were creating what would become the new leading cause of children's deaths worldwide, do you think they'd have still done it? Yet we do know that and still use them constantly.

Kurzweil sees an AI singularity coming. I'm not so sure, though when I checked his 1980s predictions against other futurists, Kurzweil was, by a scary-big margin, the most accurate futurist I could find. I can see we're at the end of the industrial revolution, and something new's coming along. But whether the dominant player ends up being AI, robotics, nanotech or genetic engineering, I have no idea. I just know it's going to flip our society on its head.

Then we have had some singularities already. You mentioned the industrial revolution, which is a pretty obvious one looking back; another would be the switch from hunter-gatherer to civilization.

Other stuff is not a singularity in that it doesn't transform society fundamentally but increases capabilities, like the printing press or the internet.

