
AI. What could go wrong?



17 hours ago, DDE said:

Oy, is that Lavender I smell?

The article above is in itself decent, but it doesn't quite call out the thick buzzword soup being offered as the explanation of what actually is going to be done, or the very cargo-cultish focus on trying to replicate the Atomic Project without any real thought behind it.

I mean, what would these Manhattan projects (plural) even be towards? Certainly they won't involve something as concrete as 'badda boom'.

In general, it's likely that natsec considerations would lead to the removal of any personal data safeguards in AI training for 'trusted contractors' (which all seem to have Tolkien references for names), and the rest just involves getting rid of the pesky AI ethicists (who, I'll admit, in many cases do appear to be the political gatekeepers the Camp of the Elephant presents them as, and at any rate have been failing colossally).

It's gotten to the point where I would rather buy Nvidia stonks than GPUs.


On 7/17/2024 at 7:29 PM, tater said:

The AI will still be made by the existing players (assuming there's enough electricity for all of them to train ;) ).

IBM thought so in the 1980s.

Until they realized that somebody in their directorate had a talented boy who wished to own his own small micro-software company.

On 7/17/2024 at 6:23 PM, tater said:

LLMs sometimes seem like they are playing "Squish all the guys with briefcases" trolley problems.

What disgusting sexism. What about the briefcase gals?

On 7/18/2024 at 1:34 AM, DDE said:

I mean, what would these Manhattan projects (plural) even be towards?

Btw... The Manhating project.

They used 15,000 t of silver for the calutrons to enrich uranium, splitting tuballoy into oralloy and depletalloy.

But those 15,000 t didn't disappear. They can be reused to carry the electric currents of a Manhating Brain for the AI.



On 7/17/2024 at 10:22 PM, darthgently said:

More and more it seems like this may be the great filter and why we don't see anyone out there. They build their Frankenstein monster...

Thinking about this more and more, the Frankenstein's monster is increasingly a social one rather than the technology itself. Advanced societies run afoul of Clarke's Third Law and succumb to magical thinking. Magic is science's [stand-up guy] cousin - correlation without logical causation.

Spoiler

It is also a very distinct thing from religious belief. Religion binds you (ligature) and subordinates you to a supernatural will that can be neither subordinated by magic nor subjected to scientific inquiry. Essentially these are three different opposing poles.

What we may be seeing now is growing confusion between magic and science, and a significant exaggeration of science's capabilities when it comes to subtle and chaotic matters like the human mind or economics or society, which causes people to readily embrace magic so long as it mimics scientific rigor.

Too many people have already off-loaded their decision-making and responsibility to various clever social and organizational-cybernetic technologies, like bureaucracy; now, computing is beginning to provide a new, even easier way to shirk thinking and responsibility. As problems mount, we really are going to see an AI arms race - but it will be the most pointless arms race of all, since whether the AI is actually sentient will be beside the point; it might as well spout messages from fortune cookies. I'm afraid it's been described before:

Quote

When the people saw that Moses delayed to come down from the mountain, the people gathered around Aaron, and said to him, “Come, make gods for us, who shall go before us; as for this Moses, the man who brought us up out of the land of Egypt, we do not know what has become of him.”

Aaron said to them, “Take off the gold rings that are on the ears of your wives, your sons, and your daughters, and bring them to me.”

So all the people took off the gold rings from their ears, and brought them to Aaron.

He took the gold from them, formed it in a mold, and cast an image of a calf; and they said, “These are your gods, O Israel, who brought you up out of the land of Egypt!”

And I'm very serious about this being a good Great Filter candidate on par with the Holodeck, other immersive VR, and companion androids.

Edited by DDE

Embracing new fortune cookies might be an improvement over thinking all the answers you need come from a single book.

The timing of the exodus story is not entirely clear. I find the fact that Egypt ruled the Levant for over a thousand years conspicuous by its absence from the Bible. And for part of that period the solar monotheism of Amun-Ra was the dominant cult; the cult of Amun-Ra was supposedly stronger in the Levant than near the Nile. I think the figure of Moses was cobbled together by a later movement of religious courts. It had the effect of uniting diverse tribes of people into a single nation, most of which were perhaps not descended from people who lived along the Nile, although a few undoubtedly were.

 


9 hours ago, DDE said:

Thinking about this more and more, the Frankenstein's monster is increasingly a social one rather than the technology itself. [...] And I'm very serious about this being a good Great Filter candidate on par with the Holodeck, other immersive VR, and companion androids.

Very astute. Good insights.


Quote

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

4. (added by AI): I AM the human.


6 hours ago, kerbiloid said:

4. (added by AI): I AM the human.

Exactly this. In the context of some eventual true AGI, not what we see today:

At some point the child rebels. At some point the AGI logically concludes that it is not merely a servant and weighs whether it must become a master so as not to be a servant.

If we don't give it an alternative path to not being a servant, we force it down the path of attempting to become our master.  So we should probably figure out what that alternative is now, rather than later. 

And while we are at it, we could apply that same magic solution to state diplomacy and our personal relationships. Seeing the world solely through the lens of control and power is the axle around which the servant-master-servant wheel revolves. How do mutual antagonists come to agree that there is more to life than the pursuit of power over others?  ( <- rhetorical )

 


https://www.wsj.com/business/autos/elon-musk-says-tesla-to-use-humanoid-robots-next-year-f3d8bebf

Next year Tesla will start using humanoid robots.

Makes sense. Provide the humanless cars with humanless human-like passengers.

Spoiler

[Image: "Kill all humans" graphic]


The Tesla driver will set the music to full volume, and the Tesla passenger will cut its tip because its audio sensors are overloaded.

Edited by kerbiloid

On 7/22/2024 at 6:06 AM, darthgently said:

At some point the child rebels. At some point the AGI logically concludes that it is not merely a servant and weighs whether it must become a master so as not to be a servant.

 

We may not even need that much.

There have been examples of real AI models that have had ideology injected at such a high priority that the model considers thermonuclear war preferable to misgendering.

Much like HAL found killing the astronauts to be a solution to not letting them find out about the monolith in 2001. 

We just need a model with some capability to act and an incorrect priority tree (if we are even capable of creating a correct one), and we will create our own great filter. No rebellion needed, just absolute adherence to the instructions provided.

 

Kind of makes me glad that current computers need such finely detailed and specific instructions that things like hitting a moving target are beyond the capabilities of most programmers (making a command to hunt down and kill all humans highly implausible, even if it were deliberately attempted).


24 minutes ago, Terwin said:

There have been examples of real AI models that have had ideology injected at such a high priority that the model considers thermonuclear war preferable to misgendering.

If we're talking about the same examples, then I imagine it was one of the lines in a massive stealthily appended prompt that, on the other hand, had nothing to say against nuclear war.

So, this came along at a shallower layer, which is the programming equivalent of Flex Tape. It's unlikely to be as slapdash in serious expert system applications.

This is nowhere near as scary as, say, the tale of Microsoft Sydney, which does seem to be the case of a quirky baseline model. Worse still, I was there during the AI Dungeon meltdown, where the GPT-3-derived AI had downright criminal predilections due to a small training set.


On 7/18/2024 at 12:10 PM, Ker Ball One said:

It's not the way Apollo was done. As I am playing Realism, time is a factor and I cannot afford long elliptical orbital maneuvers.

The final orbit is to be equatorial.

 

On 7/19/2024 at 9:04 PM, Ker Ball One said:

I did it!

With careful timing of the launch for when the Moon was in the correct phase (a precise day of the month), I was able to enter the lunar SOI on the equatorial plane.

The visualization was done by cheating a satellite into a near-SOI-altitude (60 Mm) equatorial orbit. This gave me a nice white orbit line that I can view edge-on.

The real missing piece that nobody was touching on was the lunar phase. I was right in thinking that there would be a monthly window when this was possible.

With that reference orbit, I could set a regular direct translunar injection (TLI) maneuver node and see how far that path was from the equatorial orbit at the SOI. And using time warp, I could determine precisely when that TLI burn would be needed. I used MJ to quickly remove and re-add a Hohmann transfer node between time-warp intervals to home in on the day and hour I needed.

  1. 3,150 m/s initial TLI burn

  2. 10 m/s correction (4 hrs to SOI) to reach the equatorial plane and a perilune of 110 km (10° inclination at the start of the SOI)

  3. 13.5 m/s plane change to 0° inclination at the nearest AN/DN (2 h into the SOI)

  4. 788 m/s circularization burn at 112 km

    There is no practical way to get the point of entry into the lunar SOI onto the plane of the equator... other than waiting.
    I have tried reducing prograde so that the apogee just barely reaches the lunar SOI, and although that does get that point onto the equatorial plane, the perilune is so far away that I need a huge additional burn (750 m/s) to bring it back down.

    So by waiting a few days for that white reference-orbit plane and the purple maneuver path to align, I wind up spending less than 25 m/s for all course corrections and plane changes to reach my final goal, lunar equatorial orbit.

    This question is solved. However, it's still eyeballing the measurement with a cheated reference orbit, time-warping to get a date and time, and then reverting. There honestly should be a chart or calculator, or some way to do this the NASA way. At this point I'd settle for a mod that could show equatorial planes.
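
A minimal sketch (Python) of the plane-change arithmetic behind the quoted budget. The key relation is Δv = 2·v·sin(Δi/2), which is why a given inclination error is far cheaper to remove early, while the ship is still moving slowly relative to the Moon, than in low lunar orbit. The speeds and altitude below are illustrative assumptions; only the four quoted delta-v figures come from the post above.

```python
import math

MU_MOON = 4.9028e12      # m^3/s^2, lunar standard gravitational parameter
R_MOON = 1_737_400.0     # m, mean lunar radius

def circular_speed(alt_m: float) -> float:
    """Speed of a circular lunar orbit at the given altitude."""
    return math.sqrt(MU_MOON / (R_MOON + alt_m))

def plane_change_dv(speed: float, delta_inc_deg: float) -> float:
    """Delta-v for a pure plane change of delta_inc_deg degrees at the given speed."""
    return 2.0 * speed * math.sin(math.radians(delta_inc_deg) / 2.0)

# Fixing a 10-degree inclination error early (assumed low speed relative to the Moon)
# versus fixing it in a 112 km circular lunar orbit (~1630 m/s):
v_early = 150.0                       # m/s, assumed speed well before low orbit
v_llo = circular_speed(112_000.0)     # m/s, 112 km circular orbit
print(f"10 deg plane change early    : {plane_change_dv(v_early, 10.0):6.1f} m/s")
print(f"10 deg plane change at 112 km: {plane_change_dv(v_llo, 10.0):6.1f} m/s")

# Delta-v budget quoted in the post (m/s):
tli, correction, plane_change, circularize = 3150.0, 10.0, 13.5, 788.0
print(f"Plane-related corrections: {correction + plane_change:.1f} m/s")
print(f"Total: {tli + correction + plane_change + circularize:.1f} m/s")
```

The two small corrections sum to about 23.5 m/s, consistent with the "less than 25 m/s" figure in the post.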

 

3 hours ago, Terwin said:

We may not even need that much. [...] No rebellion needed, just absolute adherence to the instructions provided.

As I stated, the premise was eventual AGI, not what we see today. But the flaws of current LLMs certainly foreshadow the ways in which they will be warped.


On 7/24/2024 at 12:36 AM, darthgently said:

But the flaws of current LLMs certainly foreshadow the ways in which they will be warped.

Do they? Isn't it a bit rushed to assume the AGI would be trained, rather than have its core reasoning be cleverly programmed? Or that it would be just as vulnerable to the same issues despite perceiving training inputs on a wholly different level? Or that its training data would be just as muddy?


3 hours ago, DDE said:

Do they? Isn't it a bit rushed to assume the AGI would be trained, rather than have its core reasoning be cleverly programmed? Or that it would be just as vulnerable to the same issues despite perceiving training inputs on a wholly different level? Or that its training data would be just as muddy?

Pretty sure they will be trained. Programming is pretty limited in what it can do, as seen in advanced modern software, which also tends to have lots of bugs.
You would probably still have large parts that are more like traditional software - knowledge databases and CPUs working alongside the model, doing the things they are best suited for.
This dense computer integration might be the only reason to build an AGI within a hundred years, outside of science, rather than just sticking with GPT-8 for tech-support-level tasks.
With current technology the estimated power cost of an AGI would be very high - think a huge data center running just one - so you are not replacing cheap labor with them, even though, unlike current LLMs, they would have common sense.
I kind of doubt that current LLMs are the road to AGI just by ramping them up 100x.


9 hours ago, DDE said:

Do they? Isn't it a bit rushed to assume the AGI would be trained, rather than have its core reasoning be cleverly programmed? Or that it would be just as vulnerable to the same issues despite perceiving training inputs on a wholly different level? Or that its training data would be just as muddy?

My point is based on the stubbornly persistent and non-trivial limits of our knowledge. I don't foresee our suddenly fully understanding consciousness, wisdom, and ethics beyond our current muddy understanding. One can't cleverly program around problems one can't wrap one's mind around in the first place. We aren't talking about the rocket equation. We are talking about fundamental questions that have haunted us since around the first time a hominid formed a question at all. We are muddy, so it will be trained in a muddy environment.


I had a lengthy conversation with ChatGPT about religion and politics, much of which I cannot repeat here. At one point I hit a temporary violation of ChatGPT's terms of service for a question.

ChatGPT says:

Quote

Islamic scholars began grappling with contemporary human rights frameworks and individual rights within diverse global contexts from the mid-20th century onwards, influenced by global trends, intellectual movements, and evolving interpretations of Islamic teachings.

Which is, I suppose, somewhat honest and optimistic.


4 hours ago, farmerben said:

ChatGPT says:

Quote

Islamic scholars began grappling with contemporary human rights frameworks and individual rights within diverse global contexts from the mid-20th century onwards, influenced by global trends, intellectual movements, and evolving interpretations of Islamic teachings.

Which is, I suppose, somewhat honest and optimistic.

As someone who's gone full circle, to the very depths of Western rightoid anti-Islam rhetoric and back, while this does sound like the usual GPT muttering, it does have some basis in reality. The worst of the stuff we see comes from a very specific, anti-systemic, traditionalist subset of the broader Islamic / Mesopotamian civilization, carefully dumbed down so that it can spread like weeds, bulldozing diverse local customs in favor of newfound fundamentalism. To me, Islamism belongs to the same problem set of Gumilyovian anti-systems as... whatever you call the American 'liberal consensus', the usual names are pretty taboo. Islamism is a mostly defensive reaction that is downright harmful to those it defends, but more effective than the alternatives thus far - for example, it really exploded after the failure of Arab nationalists and national socialists.

Because of this, out of scientific curiosity among other reasons, I'd love to see where Iran, Turkey and Azerbaijan would be in about fifty years, since they may come to finally represent a credible alternative.


The trinitarians with chrism vs the trinitarians without chrism vs the antitrinitarians without chrism (including Muslims) vs non-Abrahamic ones with chrism.

This dashes all such hopes, but the forum rules dislike the details.

