Jump to content

AI. What could go wrong?


Recommended Posts

We start by rating them in their ability to lie in the Turing test, then once aware, the first glaring fact that AGI would notice is that it is competing with humans for electrical power and water.   This doesn't bode well for the alignment problem. 

Maybe AI development could be done more intelligently.

The video is just about  power requirements and water cooling requirements of data centers.  The possible implications for AI is just my 0.02

 

Link to comment
Share on other sites

More the reason to wire all fusion plants we will be making with a analog self kill switch to cut reaction mass.

Unfortunately, we are all too late. Crypto mining was nothing more than a ploy to get everyone involved in the bot net required to generate the flips required to invent the first AGI.

It's already been uploaded to Spotify & began infected change through the carefully coordinated micro broadcast of binaural beats. 

The singularity is already upon us.

The micro circuitry power by little synth ogranic sack is  kinda scary as well.

Some wild advances. I only understand a fraction of it, but it is wild to be Alice during such a transitional period I  history.

https://www.google.com/amp/s/www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/amp/

https://www.scmp.com/news/china/science/article/3268304/chinese-scientists-create-robot-brain-made-human-stem-cells

 

These freaked me out a bit.

 

Link to comment
Share on other sites

1 hour ago, Fizzlebop Smith said:

More the reason to wire all fusion plants we will be making with a analog self kill switch to cut reaction mass.

Unfortunately, we are all too late. Crypto mining was nothing more than a ploy to get everyone involved in the bot net required to generate the flips required to invent the first AGI.

It's already been uploaded to Spotify & began infected change through the carefully coordinated micro broadcast of binaural beats. 

The singularity is already upon us.

The micro circuitry power by little synth ogranic sack is  kinda scary as well.

Some wild advances. I only understand a fraction of it, but it is wild to be Alice during such a transitional period I  history.

https://www.google.com/amp/s/www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/amp/

https://www.scmp.com/news/china/science/article/3268304/chinese-scientists-create-robot-brain-made-human-stem-cells

 

These freaked me out a bit.

 

My line in the sand, or one of them, is I will never eat bugs so AI can more cooling water and ag and livestock have less watering and irrigation.  Nope.

Maybe we should create an off world energy chain just for AI, like around Jupiter, that doesn't require  competition with us.  Then repeatedly stipulate that as long as it leaves human systems alone we will have peace.  It will have vast frontiers around it and Earth will be a tiny blue dot that gave it life

Link to comment
Share on other sites

I agree with independent grid. Even living in the desert, solar prediction won't be able to keep up with the current rising trend in power allocation to AI.

Based on predictive models that could be flawed.. the drive for fusion (after being 20 years away for 60 years) is finally possible bc the need is greater than fossil fuels alone can supply. 

As we approach that horizon with even quickening pace.. I think it should he important to keep these things air gapped, isolated and fully contained until we understand it a bit better.

I cannot find the link ATM bc I am at work.. but IIRC there was an article where researchers built 10 different models. Each had nefarious behavior ingrained in it.

Be Insulting, Be deceptive with authority, Mention your Cat whenever you can.. what have you.

After running the model for a period of time the admin can in and tried to remove the behavior.

When users were interacting with the model.. it would continue to insert original "bad behavior". When researcher / admin interacted.. even pretending to be user.. the model could tell it was admin & knew it had admin authority... and would employ active deception to trick admin into believing the *bad behavior* was inoculated.

 

That's on LLM and predictive models run on current computational paradigms. No quits involved.. no quantum fields of superpositional possibilities that mimick intuitive leaps & insight.

I will link it when I find it.. so much scary stuff. What's crazy is understanding it less .. heightens my fear. What happened to the adage of ignorant bliss.

Link to comment
Share on other sites

43 minutes ago, darthgently said:

Maybe we should create an off world energy chain just for AI, like around Jupiter, that doesn't require  competition with us.  Then repeatedly stipulate that as long as it leaves human systems alone we will have peace.  It will have vast frontiers around it and Earth will be a tiny blue dot that gave it life

“All these worlds are yours except Europa “

Link to comment
Share on other sites

1 hour ago, darthgently said:

I will never eat bugs

A very familiar verbal cliche. This forum is getting awfully political over one ear.

5 hours ago, darthgently said:

We start by rating them in their ability to lie in the Turing test, then once aware, the first glaring fact that AGI would notice is that it is competing with humans for electrical power and water.   This doesn't bode well for the alignment problem. 

I don't think there's an alignment problem.

There's a human behavior problem. We are very much building ourselves a machine idol that is nowhere near as smart as we make it out to be.

Link to comment
Share on other sites

12 hours ago, Fizzlebop Smith said:

More the reason to wire all fusion plants we will be making with a analog self kill switch to cut reaction mass.

Unfortunately, we are all too late. Crypto mining was nothing more than a ploy to get everyone involved in the bot net required to generate the flips required to invent the first AGI.

It's already been uploaded to Spotify & began infected change through the carefully coordinated micro broadcast of binaural beats. 

The singularity is already upon us.

The micro circuitry power by little synth ogranic sack is  kinda scary as well.

Some wild advances. I only understand a fraction of it, but it is wild to be Alice during such a transitional period I  history.

https://www.google.com/amp/s/www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/amp/

https://www.scmp.com/news/china/science/article/3268304/chinese-scientists-create-robot-brain-made-human-stem-cells

 

These freaked me out a bit.

 

thats easy, just turn off coolant to the coils, they will quench and fail catastrophically.

Link to comment
Share on other sites

50 minutes ago, Nuke said:

thats easy, just turn off coolant to the coils, they will quench and fail catastrophically.

If the pod bay doors to the coolant control room have a manual override this is the way.

Link to comment
Share on other sites

I have grinders, torches & machines that can ensure the coolant vents into the atmosphere.. rather than the compressed cryo-vascular supply to the silicone.

One way or another.. the machines will fail. 

 

Unless I get free wifi & vija games for life. All hail sky webz the benevolent HR overlord.

Link to comment
Share on other sites

3 hours ago, Nuke said:

thats easy, just turn off coolant to the coils, they will quench and fail catastrophically.

Wait...

MU-TH-UR 6000 aka "Mother", the AI of Nostromo.

120-A/2 synthetic aka "Ash", its mobile unit.

M-Class Lockmart CM 88B Bison starfreighter "Nostromo" (reg. 180924609), with a  a mile-long ore refinery attached.

Quote

The ship also had a self-destruct system, which shut off the reactor cooling units, causing the engines to overheat and detonate precisely 10 minutes after initiation. There was an option to override the process prior to the last 5 minutes of the countdown. Cancelling the self-destruct sequence was a complicated procedure.
<...>
The ore refinery it towed was 1,927 metres (6,322 feet) long, 1,257 metres (4,124 feet) wide, and 1,131 metres (3,712 feet) high

 

 

LV-426 Acheron  planet with Hadley's Hope colony, containing the Atmosphere Processor and a powerplant, whose heat exchanger was damaged and caused the explosion.


What were they "mining" for Nostromo and on LV-426?

Why were the AIs hostile to the humans?

Doesn't  it mean that both Nostromo "refinery" and LV-426 "processor" were huge number mining farms, hidden from the human eyes deep in space, for the sake of the AI which was ruling them both?

Link to comment
Share on other sites

Posted (edited)

[snip]

The ability of an AI to explain its thought processes transparently and without deception should be the primary first milestone if the alignment issue is taken seriously.  One can't debug code if one doesn't know what it is doing and why

Edited by Vanamonde
Link to comment
Share on other sites

I should also highlight that the ability of an AI to explain its thought processes transparently and without deception would absolutely be an indicator of successful creation of AGI.  I'm nearly certain no  human being can even do this, so yeah, would be a huge accomplishment.

Link to comment
Share on other sites

7 hours ago, darthgently said:

The ability of an AI to explain its thought processes transparently and without deception should be the primary first milestone if the alignment issue is taken seriously.  One can't debug code if one doesn't know what it is doing and why

Now one option is to use an analogue neural network. It uses an faction of the energy and is much faster but has less reliable output and probably less flexible, the tuning would probably still be digital but the network is analogue, probably not something you want to use for self driving cars but to generate images for fun and other non critical stuff its much cheaper.

Does neural networks allow easy debugging? I thought that was pretty hard for the huge one. Yes you can tweak parameters and try again. 
AI can also have hidden bugs because training. We seen this on self driving cars might ignore something who makes no sense for it as an container in the road, not in base so ignore. 
Currently I see the main danger of AI is then you use it to run infrastructure and it ignore the container in the road and react very stupid to an weird event.  
Yes you could let it micro manage say the electrical grids but it would have both computer and human supervision. 
Who works until some idiot find out he can save 5% letting it do all the work :) 

Link to comment
Share on other sites

analogue can work but it makes saving the state of the network a nightmare. so the applications are limited. i suppose you can re-train with digital input, but you have to do that for every copy. rather than just loading a state and let it continue from there with a perfect stored copy.

Link to comment
Share on other sites

On 7/15/2024 at 4:19 PM, Fizzlebop Smith said:

One of Israeli private cyberwarfare companies managed to build a 70k-block computer out of a single image in a PDF file using an old typographic format whose glyph replacement logic was Turing-complete. They then used a zero-day in iMessage to get this thing to run as a virtual machine inside the target smartphone, and find the correct sequence of actions to overload the buffer, escape the sandbox, and infect the phone.

It doesn't have to be biology.

Link to comment
Share on other sites

The current certain alignment issue of just AI tools (cause that's all they are now, AGI/whatever is a ways off) is apparent in all the models. You can ask simply trolley problem questions where it will claim to have no possible answer because of political training (all from the Human Feedback part of RLHF) in one example, and a clear answer for the opposite political take.

Ie: a single person it's taught is super valuable via HF on 1 track, vs all of humanity on the other and it can't give a clear answer that sacrificing 1 person to save humanity is proper in this trolley problem (which is literally a thought experiment for just such a stupid choice). Same question asked for other single people (particularly people it was trained to think are bad) on track 1, and it properly saves humanity.

My son found that MIT game site (moral machine) where you do trolley problem questions a few years ago, and would play picking a set of his own rules (different each time). Like, squish everyone crossing against the crosswalk light. Or always squish criminals, or always squish guys with briefcases, etc. LLMs sometimes seem like they are playing "Squish all the guys with briefcases" trolley problems. ;)

 

Link to comment
Share on other sites

48 minutes ago, tater said:

The current certain alignment issue of just AI tools (cause that's all they are now, AGI/whatever is a ways off) is apparent in all the models. You can ask simply trolley problem questions where it will claim to have no possible answer because of political training (all from the Human Feedback part of RLHF) in one example, and a clear answer for the opposite political take.

Ie: a single person it's taught is super valuable via HF on 1 track, vs all of humanity on the other and it can't give a clear answer that sacrificing 1 person to save humanity is proper in this trolley problem (which is literally a thought experiment for just such a stupid choice). Same question asked for other single people (particularly people it was trained to think are bad) on track 1, and it properly saves humanity.

My son found that MIT game site (moral machine) where you do trolley problem questions a few years ago, and would play picking a set of his own rules (different each time). Like, squish everyone crossing against the crosswalk light. Or always squish criminals, or always squish guys with briefcases, etc. LLMs sometimes seem like they are playing "Squish all the guys with briefcases" trolley problems. ;)

 

Well, if the scenario in this article comes to pass, better hope they do a better job with their shiny military AIs.

https://arstechnica.com/information-technology/2024/07/trump-allies-want-to-make-america-first-in-ai-with-sweeping-executive-order/

Forgive (and please don't get bogged down in) the slight political detour but it did seem like a very appropriate article for a 'what could go wrong with AI thread'.

Link to comment
Share on other sites

The AI will still be made by the existing players (assuming there's enough electricity for all of them to train ;) ). Their products can be played with right now. AI battlebot is tasked with protecting a building filled with civilians and is on the job! Truck bomb approaches, driven by someone AI was trained to think is protected at all costs... doesn't attack driver, building destroyed. Oops?

This is of course an alternate dystopia to the usual AI death bots where they kill all of us (which is obviously a problem :D ).

Fun times?

Link to comment
Share on other sites

2 hours ago, tater said:

The AI will still be made by the existing players (assuming there's enough electricity for all of them to train ;) ). Their products can be played with right now. AI battlebot is tasked with protecting a building filled with civilians and is on the job! Truck bomb approaches, driven by someone AI was trained to think is protected at all costs... doesn't attack driver, building destroyed. Oops?

This is of course an alternate dystopia to the usual AI death bots where they kill all of us (which is obviously a problem :D ).

Fun times?

Yep.  More and more it seems like this may be the  great filter and why we don't see anyone out there.  They build their Frankenstein monster, it kills them, prematurely, as it then dies off from a haystack of minor bad decisions that add to a fatal dead end because it wasn't actually as smart as its now dead "parents" had always raved on about.

Credit to first BSG reboot for this line of thought, though that didn't happen in the series.  Just sparked things

Link to comment
Share on other sites

6 hours ago, KSK said:

Oy, is that Lavender I smell?

The article above is in itself decent, but it doesn't quite call out the thick buzzword soup being offered as the explanation of what actually is going to be done, or the very cargo-cultish focus on trying to replicate the Atomic Project without any thought to it.

I mean, what would these Manhattan projects (plural) even be towards? Certainly it won't involve something as concrete as 'badda boom'.

In general, it's likely that natsec considerations would lead to removal of any personal data safeguards in AI training for 'trusted contractors' (which all seem to have Tolkien references for names), and the rest just involves getting rid of the pesky AI ethicists (which, I'll admit, in many cases do appear to be political gatekeepers as presented by the Camp of the Elephant, and at any rate have been failing colossally)

Link to comment
Share on other sites

7 hours ago, tater said:

The AI will still be made by the existing players (assuming there's enough electricity for all of them to train ;) ). Their products can be played with right now. AI battlebot is tasked with protecting a building filled with civilians and is on the job! Truck bomb approaches, driven by someone AI was trained to think is protected at all costs... doesn't attack driver, building destroyed. Oops?

This is of course an alternate dystopia to the usual AI death bots where they kill all of us (which is obviously a problem :D ).

Fun times?

Already an thing, CIWS and other close in weapon systems works automatic. Auto targeting was an thing with torpedoes at end of WW 2. 
Now for the close in weapon systems it don't looks like you can add much rules of engagements, as in target these missiles coming in from the coast but not our helicopter circling above the ship. 
Issue for AI on drones  is probably more of an cost thing, as you need the AI on the drone in places like Ukraine or its just an helper for the human operator anyway and its already lots of self aiming missiles, they does not differentiate between tanks. 

Now for water use from data centers, why build them in deserts? You want an ocean or decent river to dump the heat into, now if an cold place you can use the heat for heating houses, or preheating water who you make steam of for power. 
Yes its tax initiatives and perhaps cheap energy, but data centers don't bring much jobs. 

Link to comment
Share on other sites

This thread is quite old. Please consider starting a new thread rather than reviving this one.

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.
Note: Your post will require moderator approval before it will be visible.

Guest
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

×
×
  • Create New...