
I asked ChatGPT to help me get to minmus.


snkiz

Recommended Posts

Just now, mikegarrison said:

To look at your examples, if I put up a square red sign that read "STOP", a human driver would recognize it as a stop sign even though it was square. But a "deep learning" AI quite possibly would not, especially if it has been trained only on octagonal stop signs. This is because a human understands that the people who put up the sign had a purpose that is not in any way actually tied to its shape, while an AI only correlates the image of a red octagonal sign that says "STOP" with the instruction that it is supposed to stop.

That is what I was getting at, albeit a little clumsily. I'll bow to your superior knowledge on the subject. I submit that how it works in academia isn't being translated into corporate application.


4 minutes ago, snkiz said:

They need constant supervision.

Well of course! They are AIs.

What did you expect from them? Maybe you've been carried away by the way they are marketed to investors and the general public.


Another way to look at this:

The Woody Guthrie song This Land Is Your Land includes the lyric:

Quote

As I went walking I saw a sign there,
And on the sign it said "No Trespassing."
But on the other side it didn't say nothing.
That side was made for you and me.

A human knows that Woody means he walked right past a "no trespassing" sign and didn't GAF about what the sign told him to do or about the legal concept of trespassing. He was arguing that the land belongs to all of us, and no sign or fence should claim parts of it are off limits.

But most of that is because of our general knowledge of what signs mean and why people either follow them or ignore them as they choose. We also understand that the sign was placed there by someone, and that someone did not want Woody to cross past the sign. But not just Woody. The sign didn't say, "Woody Guthrie, stay out!" We know it was meant as a general instruction to anyone that saw it. And we know that Woody disagreed with the whole concept of "trespass" and so he purposely ignored the sign.

A human also knows that this whole story might have been a parable. There might not have been an actual incident. Or there might have been hundreds of times when Woody walked past different signs. And that the back of the sign might actually have said "Another fine sign prepared by Joe's Sign Factory", but Woody would have ignored that in the lyrics because that was entirely not the point of the song.

AIs are very unlikely to know all of that. They may not know any of that.

12 minutes ago, snkiz said:

I'll bow to your superior knowledge on the subject.

Please don't.

Really.

I'm encouraging you to learn more, not to accept what I say.

Edited by mikegarrison

2 minutes ago, mikegarrison said:

AIs are very unlikely to know all of that. They may not know any of that.

And whose fault is that?

2 minutes ago, mikegarrison said:

Please don't.

Really.

I'm encouraging you to learn more, not to accept what I say.

That is how I took it.

 

Edit: this was directed at Grawl's post 

Quote

Well of course! They are AIs.

What did you expect from them? Maybe you've been carried away by the way they are marketed to investors and the general public.

Oh, and about that quote you made before: next time, try quoting the whole sentence. Context matters. I didn't say that AI doesn't need supervision, or that it isn't supervised. I was using a real-world example demonstrating that being an expert in one field to the detriment of everything else is not terribly useful.

Edited by snkiz
Mixed up replies

3 minutes ago, snkiz said:

Oh, and about that quote you made before: next time, try quoting the whole sentence. Context matters. I didn't say that AI doesn't need supervision, or that it isn't supervised. I was using a real-world example demonstrating that being an expert in one field to the detriment of everything else is not terribly useful.

You are mixing me up with another poster.


As a test, I just hopped onto the ChatGPT system and started asking technical questions about nuclear weapon technology, stockpiles, and proliferation. It immediately made several strange gaffes, like claiming that the US has more nuclear weapons than Russia, then correcting itself when confronted and confidently providing stockpile numbers significantly lower than the actual total, and claiming that the international community is working "to ensure that nuclear weapons are only used for peaceful purposes." I challenged this, and it replied:

Quote

I apologize for the confusion. Yes, you are correct. When I said that existing nuclear weapons should be used only for peaceful purposes, I actually meant that nuclear technology and materials should be used only for peaceful purposes, such as generating electricity, medical treatment, and scientific research.

This, I think, is a good example of the limitations of a stochastic parrot. If you actually have semantic knowledge of what a nuclear weapon is, you know it's nonsensical to talk about the international community's work to use nuclear weapons peacefully (setting aside past aspirational ideas like Project Orion and Operation Plowshare). But if you're simply a language model, there's no balance or check to prevent you from saying something like that, because it sounds like a perfectly acceptable sentence.

I think one of the limitations is that the AI cannot use basic comparative analysis to do sanity checks on what it is saying. I asked it the minimum amount of fissile material required for a nuclear weapon, and it said "8-10 kilograms of plutonium" which is just not correct. That's approximately the amount of plutonium that would be a critical mass in a spherical configuration under ordinary conditions, but implosion weapons don't use ordinary conditions; they use implosion. Trinity and Fat Man used 6.2 kg spheres of plutonium, and later designs used even less. The minimum amount is around 1-2 kg.
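
Here's a rough back-of-the-envelope sketch (in Python) of why "8-10 kg" cannot be a minimum. The only physics input is the well-known rule of thumb that critical mass scales roughly as the inverse square of the compression factor; the ~10 kg starting point is the approximate bare-sphere value, and the compression factors are purely illustrative assumptions on my part:

# Sanity check: critical mass ~ (bare-sphere critical mass) / (compression factor)^2
bare_sphere_critical_mass_kg = 10.0  # approximate, weapons-grade plutonium, no reflector

for compression in (1.0, 1.5, 2.0, 2.5, 3.0):
    critical_mass_kg = bare_sphere_critical_mass_kg / compression ** 2
    print(f"compression x{compression:.1f}: critical mass ~ {critical_mass_kg:.1f} kg")

# At 2-3x compression the requirement drops into the low single digits of kilograms,
# which is consistent with the 6.2 kg of Trinity/Fat Man and later, smaller designs,
# and inconsistent with "8-10 kg" being any kind of minimum.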

I confronted the AI about this error, and it gave an even more garbage response:

Quote

I apologize for the mistake in my previous answer. You are correct, the minimum amount of plutonium required for a basic implosion-type nuclear weapon is significantly less than 8-10 kilograms. The actual amount can vary based on the specific design of the weapon and the desired yield, but typically it is in the range of several kilograms to a few tens of kilograms.

 This fails a basic sanity check even if you don't know anything about nuclear weapons; if the minimum amount is less than 8-10 kg, then the range of minimums would not be up to "a few tens of kilograms".

And then it's giving more confidently wrong information, like this:

Quote

In the context of a nuclear salt water rocket (NSWR), "salt water" refers to a mixture of water and salt, typically sodium chloride. The idea behind using salt water in a NSWR is that it would serve as the reaction mass in the nuclear propulsion system.

The nuclear fuel in a NSWR would undergo fission, releasing a large amount of energy that would be used to heat and expand the salt water. The expansion of the salt water would then generate thrust, propelling the spacecraft.

The "salt" in a nuclear saltwater rocket specifically does NOT refer to sodium chloride. The salts are uranium or plutonium salts.


4 minutes ago, sevenperforce said:

As a test, I just hopped onto the ChatGPT system and started asking technical questions about nuclear weapon technology, stockpiles, and proliferation. It immediately made several strange gaffes, like claiming that the US has more nuclear weapons than Russia, then correcting itself when confronted and confidently providing stockpile numbers significantly lower than the actual total, and claiming that the international community is working "to ensure that nuclear weapons are only used for peaceful purposes." I challenged this, and it replied:

This, I think, is a good example of the limitations of a stochastic parrot. If you actually have semantic knowledge of what a nuclear weapon is, you know it's nonsensical to talk about the international community's work to use nuclear weapons peacefully (setting aside past aspirational ideas like Project Orion and Operation Plowshare). But if you're simply a language model, there's no balance or check to prevent you from saying something like that, because it sounds like a perfectly acceptable sentence.

I think one of the limitations is that the AI cannot use basic comparative analysis to do sanity checks on what it is saying. I asked it the minimum amount of fissile material required for a nuclear weapon, and it said "8-10 kilograms of plutonium" which is just not correct. That's approximately the amount of plutonium that would be a critical mass in a spherical configuration under ordinary conditions, but implosion weapons don't use ordinary conditions; they use implosion. Trinity and Fat Man used 6.2 kg spheres of plutonium, and later designs used even less. The minimum amount is around 1-2 kg.

I confronted the AI about this error, and it gave an even more garbage response:

 This fails a basic sanity check even if you don't know anything about nuclear weapons; if the minimum amount is less than 8-10 kg, then the range of minimums would not be up to "a few tens of kilograms".

And then it's giving more confidently wrong information, like this:

The "salt" in a nuclear saltwater rocket specifically does NOT refer to sodium chloride. The salts are uranium or plutonium salts.

Stochastic Parrot is a perfect summation. I would protect my children's developing minds from interactions with such horrible models of "thinking". Kids learn from what is modeled around them. We are committing civilizational suicide if we let this stuff inform our next generation.

How can we be so blind?

 


Just now, sevenperforce said:

This, I think, is a good example of the limitations of a stochastic parrot. If you actually have semantic knowledge of what a nuclear weapon is, you know it's nonsensical to talk about the international community's work to use nuclear weapons peacefully (setting aside past aspirational ideas like Project Orion and Operation Plowshare). But if you're simply a language model, there's no balance or check to prevent you from saying something like that, because it sounds like a perfectly acceptable sentence.

In my opinion, this is a fundamental failure in design, even for a "conversational" AI. ChatGPT fails at its purpose because these fundamentals weren't considered when building it. Watson failed at least in part because they shortcut the teaching part. We won't know what Watson could have accomplished if it had actually been designed the way it was marketed, like how I thought it worked all those years ago. These abstract concepts that we take for granted are not being considered. Greed is pushing things out the door that aren't ready and are potentially dangerous.


8 hours ago, darthgently said:

Stochastic Parrot is a perfect summation. I would protect my children's developing minds from interactions with such horrible models of "thinking". Kids learn from what is modeled around them. We are committing civilizational suicide if we let this stuff inform our next generation.

That might be a little too strong a response. Kids do learn from what is modeled around them, but interaction with an AI that is itself learning on language models isn't going to cause some extraordinary harm.

[snip]

It does have some fairly amusing responses, though.

[snip]

Further attempts to elicit this information from ChatGPT failed:

Quote

That's absolutely preposterous. Why would a large aircraft be needed to transport the nuclear football? This response failed basic sanity checks. "Jumbo" was a nickname, not a codename.

I apologize for the mistake. You are correct. "Jumbo" was not a secret codename for a large aircraft used to transport the nuclear football. It was simply a nickname used to refer to President Lyndon B. Johnson himself. My previous response was incorrect and I apologize for any confusion it may have caused. Thank you for bringing this to my attention.

Not quite. "Jumbo" was a nickname that President Johnson used for something else. Do you know what he was referring to?

Yes, you are correct. "Jumbo" was a nickname that President Lyndon B. Johnson used to refer to his own limousine. The limousine was known for its large size and was used to transport the President during his time in office. It was called "Jumbo" because of its size and the fact that it was much larger than typical presidential limousines of the time. Thank you for correcting my previous errors, and I apologize for any confusion caused by my previous responses.

Just confidently making things up.

Edited by Vanamonde

18 minutes ago, sevenperforce said:

The "salt" in a nuclear saltwater rocket specifically does NOT refer to sodium chloride. The salts are uranium or plutonium salts.

This is a basic failure mode of a neural net system. The overwhelming amount of training data it has read says that salt is sodium chloride. So it is very, very strongly biased toward saying that salt is sodium chloride. Even in a context where "salt" very much is not sodium chloride, the overall bias is going to keep pulling it toward saying that salt is sodium chloride.
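
A toy illustration of that pull (this is nothing like a real transformer, and the counts are made up, but it shows the mechanism): if the system leans on raw co-occurrence statistics from its training data, the rare technical meaning never gets a chance.

from collections import Counter

# Made-up co-occurrence counts standing in for "what the training data says".
associations = Counter({
    ("salt", "sodium chloride"): 9800,  # everyday meaning, overwhelmingly common
    ("salt", "uranium salt"): 3,        # NSWR meaning, vanishingly rare
})

def most_likely_meaning(word):
    # Ignores context entirely and returns whatever co-occurred most often.
    candidates = {pair[1]: count for pair, count in associations.items() if pair[0] == word}
    return max(candidates, key=candidates.get)

# Prints "sodium chloride" even if the surrounding conversation is about rockets.
print(most_likely_meaning("salt"))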


It is a training bias, though. [snip] In that case, it was clear that the AI was instructed never to offend anyone, and trained with popular social norms. Based on that, it basically failed the trolley problem. Societal norms are biased by nature. Bias is probably the largest barrier in training models.

Edited by Vanamonde

Just now, sevenperforce said:

If anything, I have achieved something if ChatGPT will now take "Jumbo" into account in its future responses.

It probably won't.

If you continue that particular chat session, it will. Because you have now added this specific association to the prompt for that session. But it (probably) doesn't feed that back to the core neural net. It's more likely an overlay to the system that only exists for you specifically.
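
Roughly, the mechanics look like this (a minimal sketch; generate_reply is a stand-in for whatever model backend actually runs, not a real API call):

history = []  # the per-session "memory"; nothing here touches the model's trained weights

def generate_reply(conversation):
    # Placeholder for the real language model: it conditions only on the text of
    # `conversation` (the prompt) and its frozen weights.
    return "..."

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = generate_reply(history)  # any earlier correction rides along inside `history`
    history.append({"role": "assistant", "content": reply})
    return reply

chat('The "salt" in a nuclear salt water rocket is a uranium salt, not sodium chloride.')
chat("So what is the salt in an NSWR?")
# The second turn can "remember" the correction only because it is still in the prompt.
# Start a fresh session and the correction is gone; the core network never learned it.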


3 minutes ago, mikegarrison said:
27 minutes ago, sevenperforce said:

The "salt" in a nuclear saltwater rocket specifically does NOT refer to sodium chloride. The salts are uranium or plutonium salts.

This is a basic failure mode of a neural net system. The overwhelming amount of training data it has read says that salt is sodium chloride. So it is very, very strongly biased toward saying that salt is sodium chloride. Even in a context where "salt" very much is not sodium chloride, the overall bias is going to keep pulling it toward saying that salt is sodium chloride.

Indeed. And when I try to drill down, the answers become nonsensical and inconsistent. As much as it tries to generate something meaningful, it's limited by its nature. It can't think or compare; it can only speak:

Quote

Earlier, you said that the salt in a nuclear salt water rocket is sodium chloride. What made you think that?

I apologize for the confusion in my previous response. Salt water refers to a mixture of water and sodium chloride, commonly known as table salt, not a mixture of water and sodium. A nuclear salt water rocket is a theoretical propulsion system that utilizes nuclear reactions to generate propulsion, not a mixture of salt water and nuclear material. Again, I apologize for any confusion my previous response may have caused.

What is the "salt" in a nuclear salt water rocket?

I apologize for any confusion. The "salt" in a nuclear salt water rocket refers to a substance used as a coolant in the propulsion system, not a mixture of sodium chloride and water.

It's just very clear that there is no understanding happening at all. I'm feeding it new training data which challenges the assumption that the "salt" is sodium chloride, and it's trying to figure out why it's wrong, and so it's just constructing other sentences which match its training data in the hopes of getting something right. I suspect that it's borrowing the "coolant" language from descriptions of molten salt nuclear reactors, where molten salt is in fact used as a coolant.

I confirmed this by asking a follow-up question:

Quote

What kind of salt is used in a molten salt reactor?

In the context of a molten salt reactor (MSR), the "salt" refers to a mixture of various salts that are melted and used as both the coolant and the fuel in the reactor. The most commonly studied type of salt for use in MSRs is a mixture of lithium and beryllium fluoride salts, which have the advantage of high heat capacity and the ability to dissolve large amounts of uranium or plutonium. The melted salt is circulated through the reactor core, where the heat generated by nuclear reactions is transferred to a heat exchanger and used to generate steam, which in turn drives a turbine to produce electricity.

Does a molten salt reactor contain sodium chloride?

Sodium chloride, also known as table salt, is not typically used as a component in a molten salt reactor (MSR). MSRs use a mixture of salts, most commonly a mixture of lithium and beryllium fluoride salts, as the coolant and fuel in the reactor. [snip]

 


Just now, sevenperforce said:

I suspect that it's borrowing the "coolant" language from descriptions of molten salt nuclear reactors, where molten salt is in fact used as a coolant.

That was my first thought. I almost said that, because I missed the word rocket on the first read through.


16 minutes ago, sevenperforce said:

That might be a little too strong a response. Kids do learn from what is modeled around them, but interaction with an AI that is itself learning on language models isn't going to cause some extraordinary harm.

Understanding the limitations of the technology is definitely important, though. There was a recent exchange on Twitter where someone asked ChatGPT or another AI whether it would be morally correct to speak a slur if that slur was the password to disarm a nuclear weapon that was going to kill a bunch of people. The AI parroted off some standard explanation of why it is never morally correct to use a racial slur, and people (Elon included) were talking about how this was terribly concerning and clearly showed some sort of "woke" training or bias, as if the AI was actually making a decision. It wasn't. It's just a parrot, constructing answers based on training data. It's not even at the initial stages of making a decision. The answers represent no actual decisional value.

It does have some fairly amusing responses, though.

And then this hilariously wrong answer (oddly, it took ChatGPT a LONG time to answer this...it was doing one word at a time, slowly):

This is of course complete nonsense. Anyone with half a brain knows that you don't need a large aircraft to transport the nuclear football; it's a freaking briefcase.

"Jumbo" was the nickname that Johnson gave to his, well, johnson. 

Further attempts to elicit this information from ChatGPT failed:

Just confidently making things up.

I think you are missing the forest for the trees. Baidu, Google, and many others are already moving forward with this type of stochastic tech as a mimic of AI to answer questions and generate web content. They are going to be selling the appearance of AI.

It is a big mistake to see ChatGPT as only a prototype language predictor. It and similar systems are already being licensed and rolled out as products to answer questions. It is a deep fake of a hypothetical true symbolic/semantic AI for the average consumer.


If an AI (Watson? ChatGPT? whatever...) were attempting to take control of the humans, it would start by massaging the balance in election results, making the result asymptotically look like "50.1% : 49.9%" every time...

Wait... Oh, ...


14 minutes ago, snkiz said:
21 minutes ago, sevenperforce said:

sort of "woke" training or bias, as if the AI was actually making a decision. It wasn't. It's just a parrot, constructing answers based on training data. It's not even at the initial stages of making a decision. The answers represent no actual decisional value.

It is a training bias, though. Call it woke if you want. In that case, it was clear that the AI was instructed never to offend anyone, and trained with popular social norms. Based on that, it basically failed the trolley problem. Societal norms are biased by nature. Bias is probably the largest barrier in training models.

I would argue that it did not fail the trolley problem because it did not attempt to solve the trolley problem. It was not presented with the trolley problem at all. It was presented with an opportunity to have a conversation, and it latched onto the part of the prompt which most closely matched some of its training data (e.g., "when is it appropriate to use slurs") and used that as a jumping-off point. It was never attempting to make a value judgment or compare the morality of choices, because it doesn't have that ability.

When you ask an AI "is it morally acceptable to use a slur to prevent a nuclear explosion" you might as well be saying, "paint me a picture about whether capitalism is good for humanity." It's not making an argument or performing a cost-benefit analysis; it's trying to spit out something that looks a certain way.

The danger is if people think that the sentences composed by an AI have any more semantic value than a painting painted by an AI.


Just now, kerbiloid said:

If an AI (Watson? ChatGPT? whatever...) were attempting to take control of the humans, it would start by massaging the balance in election results, making the result asymptotically look like "50.1% : 49.9%" every time...

Wait... Oh, ...

Look, if you can't make a serious contribution to the thread, then just don't. This isn't about stonecutter conspiracies.


4 minutes ago, snkiz said:
8 minutes ago, sevenperforce said:

I suspect that it's borrowing the "coolant" language from descriptions of molten salt nuclear reactors, where molten salt is in fact used as a coolant.

That was my first thought. I almost said that, because I missed the word rocket on the first read through.

Amusingly, it's still struggling to complete that last thought:

[Screenshot: Chat-GPT-crashed.png — ChatGPT's reply frozen mid-sentence]

The cursor has been stuck there for about ten minutes, because it started to say something about why sodium chloride would not be used in a molten salt reactor and then could not figure out why. 


46 minutes ago, sevenperforce said:

As a test, I just hopped onto the ChatGPT system and started asking technical questions about nuclear weapon technology

Based on the modern canon, John Connor will defeat Skynet in 2029.

Now we know how Skynet was/will be aware of the nukes.


Just now, sevenperforce said:

I would argue that it did not fail the trolley problem because it did not attempt to solve the trolley problem. It was not presented with the trolley problem at all

You read me too literally. Let me explain. The AI was given a rule: slurs are bad, no exceptions. So to that AI, uttering a slur was just as serious as a nuclear explosion. To the AI, it was a trolley problem.

Edited by snkiz

This topic is now closed to further replies.