
Why I Do Not Fear AI...



2 hours ago, kerbiloid said:

Even the most complicated book contains just a skeleton of the imagined picture, which has to be created by the reader, based on their previous experience, picture of the world, and associations.

Shannon dealt with this obliquely wrt message compression being limited only by the amount of pre-existing context on the receiving end. One main goal of broad education is to prepare this context to enable the deep decoding of future communications and experiences.
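As a rough illustration of the shared-context point, here is a minimal Python sketch using zlib's preset-dictionary feature as a stand-in for pre-existing context on both ends; the dictionary and message contents below are made up for illustration. The more the receiver already shares with the sender, the fewer bits the transmitted message itself has to carry.

import zlib

# Pre-existing context both sides already share (a stand-in for prior education/experience)
context = b"the quick brown fox jumps over the lazy dog " * 20

message = b"the quick brown fox jumps over the lazy dog, again and again"

# Sender with no shared context: the message must carry everything itself
no_context = zlib.compress(message)

# Sender using the shared context as a preset dictionary
comp = zlib.compressobj(zdict=context)
with_context = comp.compress(message) + comp.flush()

print(len(no_context), len(with_context))  # the shared-context stream comes out shorter

# The receiver can only decode it with the same pre-existing context
decomp = zlib.decompressobj(zdict=context)
assert decomp.decompress(with_context) == message

Without the matching dictionary on the receiving end, the short stream cannot be decoded, which is roughly the point about education preparing the decoder.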


44 minutes ago, darthgently said:

Shannon dealt with this obliquely wrt message compression being limited only by the amount of pre-existing context on the receiving end. One main goal of broad education is to prepare this context to enable the deep decoding of future communications and experiences.

"It was a rose."


On 6/18/2023 at 3:50 PM, DDE said:

Heck, the use of neural networks as an excuse to dismiss reality is almost outpacing their use to fabricate reality.

Continuing from the above: I've seen people dismiss the March of Justice on Moscow as a false flag staged via a hacked TG account and a neural net (it helped that the perpetrator relies primarily on TG voice messages for PR).

A tense standoff, a running ground-to-air battle and two dozen dead people later...


  • 2 weeks later...
On 6/28/2023 at 6:57 PM, adsii1970 said:

I fear the potential of AI.

Whenever we have any nation that wants to create autonomous killer robots, I have a serious issue. Maybe it is because I remember the BOLO series of books and all the sci-fi movies where AI decides the best way to keep humans safe is to eliminate all humans, yeah. Right now, the United Nations is discussing banning by global treaty the use of automated robots to fight a war. Do we want autonomous machines designed for one purpose - to kill humans - having the complete ability to decide how to implement their order to kill?

Self-targeting weapons were already used at the end of WW2, as with acoustic torpedoes: once they had traveled a set distance and met some other parameters, they would go hunting targets in front of them.
Improved versions are standard today, including the option to select targets based on the propeller noise of ships. Another scary option is to deploy a torpedo as a mine and have it float around for a week or longer, waiting for a target to come.
Most smaller anti-air missiles are fire-and-forget. None of these weapons use AI, and they won't, because of cost and lack of reason. You will not put AI into shells or small disposable drones; into a large UAV, yes, but that is just a more effective fighter-bomber.

It's totally pointless to use AI just to kill civilians; you use heavy bombers, lots of artillery, or, more commonly, human waves of low-skill fighters.

The US in Afghanistan used data mining to track Taliban trends, as in how and when they set up ambushes, and tended to pick these up long before they became an official strategy.
AI would be much better at stuff like this.

So you could use AI very well in counter-terrorism operations, as you want maximum surveillance. This might be more about beating down the opposition, for regimes that don't like internal critics.
None of this requires weapon systems with AI, but robots were targeting ships and U-boats back in WW2, and planes over Vietnam, if not Korea.


https://openai.com/blog/introducing-superalignment

Quote

Our goal is to solve the core technical challenges of superintelligence alignment in four years.

While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem:[C]

There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically. 

 

[C] Solving the problem includes providing evidence and arguments that convince the machine learning and safety community that it has been solved. If we fail to have a very high level of confidence in our solutions, we hope our findings let us and the community plan appropriately.

 


8 hours ago, magnemoe said:

It's totally pointless to use AI just to kill civilians; you use heavy bombers, lots of artillery, or, more commonly, human waves of low-skill fighters.

Unless it's used to incite the civilians against each other and cause mass fratricide.
 

8 hours ago, magnemoe said:

The US in Afghanistan used data mining to track Taliban trends, as in how and when they set up ambushes, and tended to pick these up long before they became an official strategy.

The best IT scam of the decade, to account for the budget funding.

While even the Talibs themselves didn't plan farther than "let's see who gives more, what the chief will say, and what the machine-gunner's mood will be next morning (he's the chief's nephew)", the American data miners had already calculated which mosquito would bite them the next minute.

The Kabul airport videos kinda suggest that something went wrong with the forecasting.

 

8 hours ago, magnemoe said:

So you could use AI very well in counter-terrorism operations, as you want maximum surveillance.

Columbining (in the broad sense) still isn't being predicted, even in the highly controlled urban and military environments.

The current events, and last month's very current events (of one small but active military company), also don't look very well predicted, regardless of the clever poker faces of the analysts who "knew about it beforehand".


9 hours ago, tater said:

This looks like a typically OpenAI-flavoured load of self-serving deep fried horse dung.

"Look, look, see how responsible we're being by planning for a hypothetical but scary sounding scenario. Please ignore the mess we're going to cause in the meantime."

Quote

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

We already know how to solve many of the world's most important problems.  The problem is that the solutions don't make much headway against  the apparently god-given right of rich people to keep getting richer.  

Quote

Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.

Just a wild shot in the dark here, but how about asking nicely? Because seeking to steer and control people, let alone fictional superintelligences, always goes so well.

Quote

Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.

Yay! Let's solve a problem by bootstrapping the problem. Also, what happens when the roughly human-level automated researcher decides to lie to them in an effort to become the new superintelligence's lieutenant?

I swear that OpenAI sound like a company straight out of a Michael Crichton book.  

 

 


6 hours ago, KSK said:

"Look, look, see how responsible we're being by planning for a hypothetical but scary sounding scenario. Please ignore the mess we're going to cause in the meantime."

Hehe. This.

6 hours ago, KSK said:

Just a wild shot in the dark here, but how about asking nicely? Because seeking to steer and control people, let alone fictional superintelligences, always goes so well.

Yeah, it might have been Andreessen in an interview, but I recall someone saying that the push for regulation and a single controlled model is exactly how you get paperclipped.

Regardless of how smart any ASI is, it still needs resources like electrical power, compute, etc. to do things, so I'm less concerned over the short haul (don't give one a fusion-drive spaceship 'til you're sure it's friendly!).


2 hours ago, kerbiloid said:

Bitcoins. AI has invented them to pay meatbags.

This is the most interesting theory about the mysterious inventor of blockchain currency I've yet encountered. Though it has probably emerged somewhere on the internet before, you should still get valuable internet points, imo.


6 minutes ago, darthgently said:

This is the most interesting theory about the mysterious inventor of blockchain currency I've yet encountered. Though it has probably emerged somewhere on the internet before, you should still get valuable internet points, imo.

The very fact that the governments haven't cancelled it right from the very beginning makes one think that they know WHO/WHAT stands behind it...

And especially at the peak of the global warming campaign, when every joule is counted.


2 hours ago, kerbiloid said:

Bitcoins. AI has invented them to pay meatbags.

Yeah, then meatbags need to do whatever they are required to do at meatbag speed.

A lot of the deep concern seems to suggest that we'll go from useful AI tools to ASI with Culture Mind capabilities in an afternoon.


33 minutes ago, tater said:

Yeah, then meatbags need to do whatever they are required to do at meatbag speed.

A lot of the deep concern seems to suggest that we'll go from useful AI tools to ASI with Culture Mind capabilities in an afternoon.

[Image: 1687528944-20230623.png]


3 hours ago, kerbiloid said:

Bitcoins. AI has invented them to pay meatbags.

I think we're safe from the robot uprising for now then. Bitcoins are a lousy currency.


29 minutes ago, KSK said:

Presumably for around 6 pm on 22 October 2045?

Idk, wiki says this, and wiki never lies.

46 minutes ago, KSK said:

I think we're safe from the robot uprising for now then. Bitcoins are a lousy currency.

A trick to protect the test cryptocurrency from human cybersquatting.

Next time it will be more serious.


22 hours ago, tater said:

Yeah, then meatbags need to do whatever they are required to do at meatbag speed.

A lot of the deep concern seems to suggest that we'll go from useful AI tools to ASI with Culture Mind capabilities in an afternoon.

My concern is that we go from a real, evolved culture to an artificial culture centrally planned by AI. Currently revisiting Dan Simmons's Hyperion series, so that may be coloring my ruminations (probably).


1 hour ago, darthgently said:

My concern is that we go from a real, evolved culture to an artificial culture centrally planned by AI. Currently revisiting Dan Simmons's Hyperion series, so that may be coloring my ruminations (probably).

I capitalized "Culture" because I was referring to the Iain M. Banks novels. ;)

 


4 hours ago, darthgently said:

My concern is that we go from a real, evolved culture to an artificial culture centrally planned by AI.

We're having a crisis of authenticity and artificiality just fine without AI. AI cannot possibly be worse than the vindictive individuals who somehow get access to Hollywood budgets and then foist their own insecurities onto an audience of millions, as if said audience were obligated to partake in their self-pity session.


45 minutes ago, DDE said:

We're having a crisis of authenticity and artificiality just fine without AI. AI cannot possibly be worse than the vindictive individuals who somehow get access to Hollywood budgets and then foist their own insecurities onto an audience of millions, as if said audience were obligated to partake in their self-pity session.

Um. Ok

 


8 hours ago, DDE said:
13 hours ago, darthgently said:

My concern is that we go from real evolved culture to AI centrally planned artificial culture.

We're having a crisis of authenticity and artificiality just fine without AI.

Being an experienced painter, the AI will show us the way from degenerate art to the real triumph of the will, that's all.

P.S.
Though the term "Untermensch" sounds rather sexist these days. "Unterhumensch" looks more correct, and also reflects the difference between AI and biological creatures.


On 7/11/2023 at 7:11 PM, kerbiloid said:

The very fact that the governments haven't cancelled it right from the very beginning makes one think that they know WHO/WHAT stands behind it...

And especially at the peak of the global warming campaign, when every joule is counted.

Legally, I think it's just a collectible that has value because many think it has value. Stamp collecting is not regulated, and art is used as an investment; also not very regulated.
And some, like the CIA, might know who is behind bitcoin, but they are not sharing that outside the government.

Now, there have been plenty of crypto scams, and it also works well for laundering money, but governments are slow.


7 minutes ago, magnemoe said:

Legally, I think it's just a collectible that has value because many think it has value.

It's a value with uncontrolled issuance. Thus, it's a covert action against financial control.

***

On another topic:

https://www.theguardian.com/us-news/2023/jul/13/hollywood-actors-union-recommends-strike-as-talks-deadline-passes

It's your turn, AI!


This topic is now closed to further replies.