
"Do you trust this computer?" Documentary ft. Elon Musk


DAL59


3 hours ago, DAL59 said:

What about the AI box theory?  A sufficiently advanced AI should be able to convince anyone to do anything.

https://en.wikipedia.org/wiki/AI_box

Any early attempt will certainly be boxed in, and yeah, the basic premise is that once it becomes a "superintelligence" it will be able to convince people to aid it, or even use subterfuge to escape. At orders of magnitude higher clock speed than people, the world is sort of like Groundhog Day to an AGI, after all.


5 hours ago, tater said:

The bar is constantly moved.

Just like requirements for a game. But the mechanics stay the same.

4 hours ago, DAL59 said:

A sufficiently advanced AI should be able to convince anyone to do anything.

Still miles away.

Unless someone is hiding something up their cheeks...


9 hours ago, tater said:

This is untrue, IMO. The point, I think, is recursive self-improvement. What if you make a narrowly intelligent system that writes its own code? Programmers can choose not to do that, but someone might, as a way to accelerate the process toward their goal. The programmers could quickly be in a situation where the code is apparently working, but they don't know exactly what it is doing. It's at least a concern (I certainly don't panic about this possibility, lol).

The arguments made by a few people that there is something here to be concerned about (not panic) are actually pretty compelling. It doesn't require hatred at all, just competence, and a set of values---even one programmed by us---that is unintentionally incompatible with human self-interest.

The idea is that someone makes an artificial general intelligence that is at least human level at the sort of intellectual tasks a computer could do (thinking, basically---though this includes writing code, doing abstract work like theoretical physics, etc). The argument then basically says that for various reasons the system improves itself well beyond human capability, if for no other reason than clock speed (however fast the computer hardware can run the network vs the ~200 Hz our brains operate at). I'm personally not terribly concerned about the classic scenarios, but they are certainly worth considering when we assign value systems to computers that can think at some point. Even narrow systems have alignment problems. If Facebook has intelligent code whose goal is simply to keep each member on the platform for as many minutes as possible per day, that can be less than ideal for humanity.

Code improvement works, as seen in normal software, but it's limited and has diminishing returns. 
You also need hardware upgrades. 
Note that you don't need intelligence for automated code improvement; you can brute-force it by just testing and improving, as in the sketch below. 
But yes, the "we don't know how it works" problem is a real issue with AI, with automated software design, and even with giant software projects. 
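
A minimal sketch of that brute-force "test and improve" loop, assuming nothing about any real system---the target, mutation step, and scoring are all invented for illustration:

```python
# Blind mutation plus a test: improvement without any "intelligence".
import random

def score(coeffs):
    """Lower is better: how badly coeffs approximate y = 3x + 7 on a few test points."""
    return sum((coeffs[0] * x + coeffs[1] - (3 * x + 7)) ** 2 for x in range(-5, 6))

def improve(coeffs, rounds=10_000):
    best, best_score = list(coeffs), score(coeffs)
    for _ in range(rounds):
        candidate = [c + random.gauss(0, 0.1) for c in best]  # random tweak, no insight
        s = score(candidate)
        if s < best_score:                                     # keep only what tests better
            best, best_score = candidate, s
    return best

print(improve([0.0, 0.0]))  # drifts toward [3, 7] without "understanding" anything
```

Nobody can point to the line that "knows" the answer; it just tests and keeps improvements, which is exactly why the result can be opaque.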

We do not know if an AI in an automated car might ignore a pedestrian in a yellow jacket. 
How do we deal with it? Testing is one way; another is to limit its authority, as we do with humans all the time. You typically have spending limits, so you cannot transfer 100 million to a tax haven and jump on a plane. You have a second layer of systems, even mechanical ones, that do not let you go outside bounds and overload the reactor; a gun on a warship is limited so it cannot fire on the bridge, and even if it could, the shell would not have armed yet. 
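
A rough sketch of that second layer, with an invented limit and names (not any real banking system or API):

```python
# A hard authority limit wrapped around an untrusted decision layer.
TRANSFER_LIMIT = 10_000  # hypothetical per-transaction cap

def guarded_transfer(amount, destination, do_transfer):
    """Refuse anything outside bounds, whatever the caller 'decided'."""
    if amount > TRANSFER_LIMIT:
        raise PermissionError(f"Refused: {amount} exceeds the {TRANSFER_LIMIT} limit")
    return do_transfer(amount, destination)

guarded_transfer(5_000, "supplier", lambda a, d: print(f"sent {a} to {d}"))  # allowed
# guarded_transfer(100_000_000, "tax haven", lambda a, d: None)  # raises PermissionError
```

The point is the same as the warship gun: the outer bound does not care how smart or buggy the thing inside it is.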

And yes, trying to use the main gun against the bridge will get you in trouble; yes, you need intelligence to know this. 
Compare it to an animal: we have used animals since forever, and something like a horse or a dog is potentially dangerous too. 

I don't see AI weapons as a real danger. We have AI weapons now: not as drones, but plenty of missiles and torpedoes have automated targeting. 
You will not use this if it can hit friendlies; the same goes for AI drones. An automated sentry gun will work the same way a minefield does, with the exception that you can turn it on and off. Anyway, any weapon needs reloads, and it would get deactivated rather than reloaded if it caused problems. 
And no, nobody would let an AI control strategic or even very heavy weapons like artillery without oversight. 

Now, the real danger is more that early AI will mess things up, like deleting all content on Amazon's servers to store cat pictures :)
This will be the first learning experience in dealing with primitive general AI. 

Later, when we get human-level AI, we face another issue: simply that it's an extremely powerful tool. It would naturally be better at handling all sorts of computer systems, as it's directly linked to them. It could easily put most people who primarily work at a computer out of work, for one.
 


43 minutes ago, magnemoe said:

We do not know if an AI in an automated car might ignore a pedestrian in a yellow jacket.

We "know" this - it's a probability.

The only question is whether the training matches what is there IRL.

 

I've seen that some CAPTCHAs now use blurred versions of what they were asking before. Perhaps it's an attempt to improve things, but without any proof that the AI has "understood" something (we'll take that as meaning it can supply a visualization of what it saw), we'll never know whether it's enough for the next edge cases.


1 hour ago, magnemoe said:

And no, nobody would let an AI control strategic or even very heavy weapons like artillery without oversight. 

don't be too sure...  


  • 2 weeks later...

Well, as a supporter of AI development, I kinda think that the idea of AI destroying humanity is ridiculous. When we look at how automation improves our world today, I'd say that we've applied artificial intelligence (albeit simple forms of it) to pretty much a lot of stuff. We have automated assembly robots in factories, we have smart missiles and smart bombs on our fighters, we have drones, we have autopilot; even a Roomba is basically an AI for cleaning floors. But then again, let's ask ourselves: why do we create AI? Not the simple kind, but the kind that can learn and interact like a human. Pretty much the answer is to be able to learn from and interact with humans. I'd prefer that the AI used in everyday life be limited to simple, narrow AI that only knows what it's supposed to do (in other words, an AI for a Roomba cannot learn how to attach a flamethrower to burn down its owner's house; it only knows how to clean), and that higher-level AI (the kind that can learn and interact with humans) be limited to digital form only, where it can interact with and learn from humans as a digitized companion.

A small example of how higher-level AI should function and interact with humans:

Personally, I want humans and AI to coexist and understand each other :)


I don't know if anyone has mentioned this yet, but I think everyone should read the excellent waitbutwhy article on AI from a couple of years back. It's a 2-parter and quite lengthy, but more than worth the time to read.

Really thought-provoking; it certainly gave me (an AI advocate) reason to pause and think about some of the implications and dangers that many people seem to dismiss quite quickly.

Edited by Steel

2 hours ago, Steel said:

I think everyone should read the excellent waitbutwhy article on AI from a couple of years back.

The idea isn't too novel.

Also, to me, there's a question we have never actually looked at.

Bugs.

The world is full of imperfection. How do we know that this machine will meet all the expectations we built into it?

Heck, we haven't even solved the bugs that have been lying around the tech world since... ever.

Then there's the fundamental flaws.

The answer to the title question is "no", even today.


1 minute ago, YNM said:

Bugs.

The world is full of imperfection. How do we know that this machine will meet all the expectations we built into it?

The answer is well-known: open source!
Millions of
first grade students
bored coders
unemployed programmers
unrecognized geniuses
random people whom you will never meet (or will, but it would have been better if you hadn't).


On 4/9/2018 at 3:27 AM, DAL59 said:

don't be too sure...  

Why would you? It requires a team just to service the weapons, and all an artillery cannon or heavy bomber does is hit specified targets, either pre-planned or on request. 

Now, one problem with AI versus humans: even though humans can go postal, this is rare. A human knows that if he kills many people he will never leave jail, so the chance of him using the self-propelled gun on brigade headquarters because of a dressing-down is very low; it's also not so easy to pull off in practice. 
We don't know anything about AI psychology, as we have not created one smart enough to have a psychology.
 


19 minutes ago, kerbiloid said:

Do the CPU instructions depend on chemical injections?

Well, it depends on electrons flowing...

 

2 hours ago, magnemoe said:

We don't know anything about AI psychology, as we have not created one smart enough to have a psychology.

You don't even need an AI capable of one for there to be a screw-up.

Bugs are present in every part of the computer/digital architecture humans have made.


6 minutes ago, YNM said:

Well, it depends on electrons flowing...

If you pour different hormones on a CPU, will "xor eax, eax" give a different result?
And then the previous result if you pour another hormone?
Or some random result depending on the poured hormones?
The human algorithm stays the same, but probably the weights of the factors change.
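
A rough sketch of that last point, with invented factor names and weights: the decision procedure never changes, only the hormone-like weights do, whereas "xor eax, eax" gives zero no matter what.

```python
# Same fixed algorithm (a weighted threshold), different weights, different decision.
def decide(weights, factors):
    """Act if the weighted sum of the factors crosses a threshold."""
    return sum(w * f for w, f in zip(weights, factors)) > 1.0

factors  = [0.4, 0.9, 0.2]   # the same situation, perceived identically
calm     = [0.5, 0.5, 0.5]   # baseline "hormone" weights
stressed = [1.5, 1.2, 0.8]   # the same factors, weighted differently

print(decide(calm, factors))      # False
print(decide(stressed, factors))  # True -- same code, shifted weights, opposite decision
```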

6 minutes ago, YNM said:

Bugs are present in every part of the computer/digital architecture humans have made.

As well as in human heads. But sufficiently advanced algorithms compensate for the errors those bugs introduce.

P.S.
AI will never take candy from a baby.
Unless it needs to separate carbohydrates from proteins.

Edited by kerbiloid

1 minute ago, kerbiloid said:

If you pour different hormones on a CPU, will "xor eax, eax" give a different result?

The result of previous computations affects the ones after it, if they run in sequence. Hormones are just that: the result of something affecting something else.

2 minutes ago, kerbiloid said:

But sufficiently advanced algorithms compensate for the errors those bugs introduce.

Even if the fault is in the method or the concept, like the examples I gave above?


10 minutes ago, YNM said:

Even if the fault is in the method or the concept, like the examples I gave above?

If a system's state is sensitive/vulnerable to a single factor, that only means that the system is oversimplified and you need to add feedback loops, inertia, and parallel processes.
This also means that you then have to look at the more general system rather than at one particular part of it.

Remember the basics of metrology and statistics:

σ_avg = σ / √n

The deviation of the average is inversely proportional to the square root of the number of samples.

When you have 7 000 000 000 persons, everyone's headbugs mean ~84 000 times less than if you have to deal with one of them.
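
A quick check of that factor, using the standard-error formula above:

```python
# Deviation of the average scales as 1/sqrt(n).
from math import sqrt

n = 7_000_000_000
print(sqrt(n))  # ~83,666 -- roughly the "84 000 times less" quoted above
```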

And with AI you can have kerbillions of synchronized AI personalities inside one server.
A whole galactic council.

Edited by kerbiloid

1 minute ago, kerbiloid said:

If a system's state is sensitive/vulnerable to a single factor, that only means that the system is oversimplified and you need to add feedback loops, inertia, and parallel processes.

... all of which, for now, exist in the form of human software engineers / computer scientists.

And they *might* be wrong in something.

 

Let's not forget - when reliability increases, you're removing the failures that naturally have a higher chance of happening. But there are still the failures that we don't even know are possible.

Southwest Airlines flight 1380. The first in-air death due to partial damage in a long time. Described as a "freak event". But face it - when reliability already soars high, every incident and accident is going to be a "freak" event. But that doesn't mean we just let it stand.

Someone is going to investigate it, then come up with reasons why it happened. Then, hopefully, it effectively never occurs again. But we never know what other freak events will come.

 

"Perfect"ed our main theories might be, there are still cases where we just didn't ask. We just didn't bother. We had no idea.

And then there's the fact that our "perfect" theories are still... rough around the edges.

And the same goes for computers and artificial intelligence.

 

From a book by Robert L. Wolke in the "What Einstein..." series, I remember (so do take this with some salt & sugar) a quote it attributes to Einstein: asked by a journalist "what's the newest in physics?", he answered "well, have we figured out all the old stuff?".

And this is why embracing AI will be harder than before - we have to ready the base first. And that's not a job we tend to do.


31 minutes ago, YNM said:

... all of which, for now, exist in the form of human software engineers / computer scientists.

On a low level - certainly. But if a system as a whole is vulnerable to a single error, this just means that it's oversimplified on a high level.
Because if it weren't - it wouldn't be.

31 minutes ago, YNM said:

Southwest Airlines flight 1380. The first in-air death due to partial damage in a long time. Described as a "freak event". But face it - when reliability already soars high, every incident and accident is going to be a "freak" event.

This case is absolutely trivial.
Engines are just physical objects and don't burst just because.
An engine's condition is a particular part of a more general system - "momentary state of this particular engine" vs "engine health checking and monitoring".
So here you would either accept such a risk as appropriate (i.e. the countermeasures needed to decrease it would require additional activities or resource manufacturing, causing more deaths per flight: https://en.wikipedia.org/wiki/Occupational_fatality),
or raise the system's complexity: add more sensors, make the engines stronger, check them more often.
As this is the company's first lethal accident, it looks logical to assume that these measures, and this approach, have worked.

31 minutes ago, YNM said:

We just didn't bother. We had no idea.

We presume the probabilities acceptable and additional efforts excessive.

 

Edited by kerbiloid

13 minutes ago, kerbiloid said:

But if a system as a whole is vulnerable to a single error...

There isn't an error in the examples I gave above about computer "bugs". They performed as expected. It's just not desirable.

14 minutes ago, kerbiloid said:

Engines are just physical objects and don't burst just because.

But did you know the reason why? Can you be absolutely sure of all the things that had and hadn't happened?

Looks simple? Yes.

Predicted for sure? Not really.


17 minutes ago, YNM said:

But did you know the reason why?

Material objects usually get destroyed when their crystallographic defects grow, multiply, and merge.
At some point the invisible dislocations coalesce into a crack, which in turn grows extremely rapidly.
So, to prevent a thing's unexpected destruction, you have to monitor its crystallographic defects.
You don't need to drill into it or anything. Acoustic emission and X-ray/gamma radiography are enough and do not change the part's state. 
So, if the part keeps failing, then you should either perform the inspection procedures more often or increase the measurement accuracy and sensitivity. 
As you can see, in both cases the solution lies at a higher level of complexity.
The rapidity of the crack growth (as with chemical reactions) doesn't matter here at all, unless you want to know how fast the explosion will be.

17 minutes ago, YNM said:

Predicted for sure? Not really.

Really-really. It just can be too expensive, or require additional activities (say, driver trips) which would increase total fatalities and are too expensive in the sense of survival.

17 minutes ago, YNM said:

There isn't an error in the examples I gave above about computer "bugs". They performed as expected. It's just not desirable.

Maybe I don't understand their fast speech well enough, but as far as I can see, signal redundancy and layered protection countermeasures are the way.

Edited by kerbiloid
