
"Do you trust this computer?" Documentary ft. Elon Musk


DAL59


We've reached a point in history where literally anything could occur within the next decade.  Drone wars destroy civilization? Sure. No skynet, but there are androids? Sure.  Countries use AI for 100% effective propaganda? Sure.  Everyone lives longer and is richer? Sure.  Rich people use robots to take over small nations? Sure.  AI causes economic collapse? Sure.  AI that is superintelligent but not sentient? Sure.  AI that is sentient? Maybe.  

 

 


AI is only to be feared if we are irresponsible with it. Same with people, really. AI needs to learn things. And if we make sure AIs learn the proper things, then there's nothing to fear. Of course, people are likely to be irresponsible...


9 hours ago, DAL59 said:

We've reached a point in history where literally anything could occur within the next decade. 

Would have been the same any other time.

 

First of all, I should mention that all the embedded content on the website was broken, at least through my connection.

Second of all, regarding AI capabilities themselves, they'll only ever be as capable as what their human creators were capable (and remembered) of.

 

As always, cock-up before conspiracy.

 

EDIT: The connection problem arose from our dickish Vimeo blocking. I bet everyone else is fine...

Edited by YNM

2 hours ago, YNM said:

conspiracy

I don't see a conspiracy. I see a bunch of people who invented something, which they don't understand, so they are concerned about it. Is that concern justified?

Edited by rudi1291

1 hour ago, rudi1291 said:

I see a bunch of people who invented something, which they don't understand, so they are concerned about it.

And that's a cock-up.

The concern is conspiracy.

Never deploy something you're too unsure of. Unless you're a great gambler or something.

Edited by YNM

10 hours ago, tater said:

Few people in AI think that anything crazy will happen within the next decade.

Elon Musk and Ray Kurzweil do.


6 minutes ago, DAL59 said:

Elon founded Neuralink and OpenAI, and Kurzweil literally wrote the book on AI.

http://stargate.inf.elte.hu/~seci/fun/Kurzweil, Ray - Singularity Is Near, The (hardback ed) [v1.3].pdf

Wasn't talking about Kurzweil, but just because Musk puts his money somewhere doesn't mean he is automatically an expert in the field.


Kurzweil doesn't claim that AGI will be a thing in 10 years.

The average assessment when they did a poll at an AI conference was 40 years as I recall.

The alignment problem is a legitimate concern, and certainly should be solved before turning on a general AI. A couple of decent books on the "we better be careful" side of things are Bostrom's Superintelligence and Tegmark's Life 3.0.

 

Edited by tater

IMO the filmmakers forgot about at least 3 billion people who have no connection to "advanced" electronics at all.

Even if our "modern" society falls, they might as well continue with whatever they knew anyway.


10 hours ago, YNM said:

IMO the filmmakers forgot about at least 3 billion people who have no connection to "advanced" electronics at all.

Not for long, with Starlink and all, and cell phone use in poorer countries is increasing.

Also, it's not the topic of the documentary...


3 hours ago, DAL59 said:

Not for long, with Starlink and all, and cell phone use in poorer countries is increasing.

Tell that to them.

[broken image embed]

(A bit of an extreme example, but there are many more at similar levels of technology acquisition.)

 

3 hours ago, DAL59 said:

Also, it's not the topic of the documentary...

Now, on-topic :

IMO we would never quite stumble upon something unexpected. After all, we were the ones who set the framework; we know exactly what it's doing. In a classic neural network, training is just about finding local minima; in non-neural types, you can still paint a picture of what is there.

Their limit is their programmers.

Unless they deviate from it.
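The "finding local minima" point can be sketched with a toy gradient-descent loop (plain Python; the one-parameter loss function here is a hypothetical example, not any particular network):

```python
# Toy illustration: training a network is just sliding a parameter
# downhill on a loss surface until it settles in a (local) minimum.
def loss(w):
    return (w - 3.0) ** 2       # hypothetical loss with minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)      # derivative of the loss

w = 0.0                         # start far from the minimum
lr = 0.1                        # learning rate
for _ in range(100):
    w -= lr * grad(w)           # step in the direction that lowers the loss

print(round(w, 3))              # settles at the minimum, w = 3.0
```

Nothing "unexpected" happens here in the sense YNM means: the update rule is fixed by the programmer, and the parameter just converges to wherever the loss surface sends it.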

 

On the title question though, the answer is "no" - for all it's worth, there could be a power cut, the batteries could explode, the circuit could short out XD

Edited by YNM

Seriously... why should I panic? When (if) we finally create AI, it will not be a supernatural god-like being able to warp reality, defy the laws of physics, and kill me through my TV set with the power of pure hatred towards humanity.


6 hours ago, YNM said:

IMO we would never quite stumble upon something unexpected. After all, we were the ones who set the framework; we know exactly what it's doing. In a classic neural network, training is just about finding local minima; in non-neural types, you can still paint a picture of what is there.

This is untrue, IMO. The point, I think, is recursive self-improvement. What if you make a narrowly intelligent system that writes its own code? Programmers can choose not to do that, but someone might, as a way to accelerate the process toward their goal. The programmers could quickly be in a situation where the code is apparently working, but they don't know exactly what it is doing. It's at least a concern (I certainly don't panic about this possibility, lol).

6 hours ago, Scotius said:

Seriously... why should i panic? When (if) we'll finally create AI, it will not be a supernatural god-like being able to warp reality, defy the laws of physics and kill me through my TV set with the power of pure hatred towards humanity.

The arguments made by a few people that there is something here to be concerned about (not panic) are actually pretty compelling. It is not hatred at all, just competence that is required, and a set of values---even programmed by us---that is unintentionally incompatible with human self-interest.

The idea is that someone makes an artificial general intelligence that is at least human level at the sort of intellectual tasks a computer could do (thinking, basically---though this includes writing code, doing abstract work like theoretical physics, etc.). The argument then basically says that for various reasons the system improves itself well beyond human capability, if for no other reason than clock speed (however fast the computer hardware can run the network vs the ~200 Hz our brains can operate at). I'm personally not terribly concerned about the classic scenarios, but they are certainly worth considering when assigning value systems to computers that can think at some point. Even narrow systems have problems in alignment. If Facebook has intelligent code whose goal is simply to keep each member on the platform for as many minutes as possible per day, it can be less than ideal for humanity.
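The clock-speed part of that argument is just arithmetic. A back-of-the-envelope version, using the ~200 Hz figure from the post and an assumed 2 GHz processor (both numbers are rough illustrations, not measurements):

```python
# Rough comparison of "ticks per second" for a brain vs commodity hardware.
# Both figures are illustrative assumptions.
brain_hz = 200            # approximate neural firing rate cited above
cpu_hz = 2_000_000_000    # a modest 2 GHz processor

speedup = cpu_hz / brain_hz
print(f"{speedup:,.0f}x")  # ten million times more ticks per second
```

Even granting that a "tick" means very different things in the two systems, the raw factor is what drives the "it could think faster than us" worry.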


21 minutes ago, tater said:

What if you make a narrowly intelligent system that writes its own code?

Define "writing it's own code".

Neural net doesn't do it, it just change the strength of various codes. (and it doesn't write it's own assessment program code.)

"Standard" AI doesn't use an AI when evolving/adding "nodes" (so it's just another program like the macro for your tax excel or something).

 

IMO we will never quite see any surprise. When we finally write a program that adapts like a person (so, train it on one set of "objects", then show it a completely different specimen and have it distinguish that without extra training), we'll know why it's doing it.

Thing is, our programmers probably don't know how to get there yet.

 

EDIT: And if anything, currently no AI has any clue what it is taking in. Even though it can classify which road sign it sees and react properly (I bet the reaction is still hard-coded), you can't ask it what the sign looked like. This is why we're still a long way away from AGI and so on.

Edited by YNM

Some actual survey data on when AI researchers think various milestones might be reached. Highly variable answers.

https://arxiv.org/pdf/1705.08807.pdf

https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/

Note that these are from 2016, and they list defeating the best human Go players as years away.

Interesting reading.

 


1 hour ago, tater said:

Note that these are from 2016, and they list defeating the best human Go players as years away.

Playing a game that basically just changes the values of things is easy.

Getting to person level would need a quantum leap.

 


16 minutes ago, YNM said:

Playing a game that basically just changes the values of things is easy.

Getting to person level would need a quantum leap.

The bar is constantly moved. Years ago, it was said that only a human could play chess at the master level. Then after that fell, they said that Go was so much harder that it would take many decades. The same program then learned chess in 4 hours and beat a purpose-built chess engine. Poker? Poker is playing the guy across from you, not the cards (because you don't actually play the hand you are dealt; you can bluff to victory). Computers now beat poker players.

Yes, it's easier when the goal is something like a score, which is very easy to characterize. Person level has already been achieved and exceeded on many tasks, it's just the generalist ability that is lacking. Short of AGI, intelligent systems will certainly be disruptive (and arguably already are).


"Never trust a computer you can't throw out a window".  Steve Wozniak.

And when you can download "iEvilOverlord" onto an iPhone, don't trust that either. Or perhaps trust the hardware (because you can kick it if it is bad), but not the software.

