
I asked ChatGPT to help me get to Minmus.


snkiz


It did not go well. First I asked it how to play Kerbal Space Program. It gave a six-step, fairly general overview of what Kerbal is about and name-dropped some planets. OK so far.

Then I asked it: how much delta-v to get to Minmus?

It gave me the incorrect delta-v for the transfer burn (500-600 m/s). It also didn't mention the inclination at all. It did say it depends on where you start, and it mentioned in-game tools (like a delta-v map?) and MechJeb. It also incorrectly asserted that the delta-v required depends on your mass and thrust level. I may need to be more specific.

OK, how much delta-v do I need from the launch pad? This is where it gets interesting.

First it again repeats: it depends. This time on altitude and velocity; I think it expects me to air-launch, 600-700 m/s this time. Then it goes into further detail. It tells me to launch from the launch pad to LKO (it did not use acronyms) at an altitude of 200 km for about 7,500 m/s! Next it mentions the orbit adjustment burn, but doesn't call it an inclination change: "a significant amount of 200-300 m/s". Then comes the transfer burn; a point for using the words Hohmann transfer. However, this burn is also 200-300 m/s. And finally it correctly labeled the orbital insertion burn, but gave me a value of 50-100 m/s. Here it says again, it depends, but you can calculate what you need from the physics engine or use "in-game tools" such as a delta-v map or MechJeb.
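For anyone curious what those burns should actually be, here's a quick vis-viva sanity check I threw together. It's only a back-of-the-envelope sketch: the stock Kerbin and Minmus constants are from memory, it assumes an 80 km parking orbit and an idealized Hohmann transfer, and it ignores the 6° plane change entirely.

```python
from math import sqrt

# Stock KSP constants, from memory -- worth double-checking against the wiki
MU_KERBIN = 3.5316e12      # Kerbin gravitational parameter, m^3/s^2
R_KERBIN = 600_000         # Kerbin radius, m
MU_MINMUS = 1.7658e9       # Minmus gravitational parameter, m^3/s^2
R_MINMUS = 60_000          # Minmus radius, m
SMA_MINMUS = 47_000_000    # Minmus orbital radius around Kerbin, m

def visviva(mu, r, a):
    """Speed at radius r on an orbit with semi-major axis a."""
    return sqrt(mu * (2 / r - 1 / a))

# Circular 80 km parking orbit around Kerbin
r_lko = R_KERBIN + 80_000
v_lko = visviva(MU_KERBIN, r_lko, r_lko)

# Hohmann transfer from LKO out to Minmus's altitude
a_transfer = (r_lko + SMA_MINMUS) / 2
dv_transfer = visviva(MU_KERBIN, r_lko, a_transfer) - v_lko

# Capture from the transfer orbit into a 10 km orbit around Minmus
v_ship_at_apo = visviva(MU_KERBIN, SMA_MINMUS, a_transfer)   # ship speed at apoapsis
v_minmus = visviva(MU_KERBIN, SMA_MINMUS, SMA_MINMUS)        # Minmus's own orbital speed
v_inf = abs(v_minmus - v_ship_at_apo)                        # approach speed relative to Minmus
r_low = R_MINMUS + 10_000
v_low = visviva(MU_MINMUS, r_low, r_low)                     # circular low-Minmus-orbit speed
v_peri = sqrt(v_inf**2 + 2 * MU_MINMUS / r_low)              # flyby speed at periapsis
dv_capture = v_peri - v_low

print(f"LKO orbital speed:   {v_lko:6.0f} m/s")
print(f"Transfer injection:  {dv_transfer:6.0f} m/s")   # about 920 m/s, not 200-300
print(f"Minmus capture burn: {dv_capture:6.0f} m/s")    # about 160 m/s, not 50-100
```

Getting to that 80 km orbit in the first place costs roughly another 3,400 m/s in stock aero, and the landing itself a couple hundred more, so nothing ChatGPT quoted is even in the right ballpark.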

So this isn't going well. One more: I ask it, "Can you guide me to build a rocket capable of landing on Minmus, with the delta-v required?"

It goes on to give me some really generic "tips" on what parts to use. It does say to consider things like delta-v required, mass, and stability. It says a typical Minmus landing mission is 800-1,000 m/s. This time it combines launch, inclination change, and transfer into one step, and tells me I need 600-700 m/s to reach Minmus from the launch pad? It goes on to say I now need to perform my landing burn and enter Minmus's atmosphere? It doesn't say at all how much delta-v, and only says to save enough to get back to Kerbin, without saying how much that is.

 

All in all it was very convincing, and wrong. Oh so wrong. Thing is, if I didn't already know what I was doing, I might have tried to follow it. I mean, with 7,500 m/s of delta-v I'm sure I would have figured it out. But it was eerie how confident it was, and the details it did manage to include. I personally agree with ChatGPT on one thing: a delta-v map and MechJeb (or something like it) should be in-game tools. This was a fun little distraction. It has me much less worried about it as a tool for plagiarism, research, or even a search-tool replacement. If Microsoft integrates this into Bing, it's just going to be an even bigger joke than it is now.

 


ChatGPT (like all similar AIs) is extremely problematic because it is good at sounding like it knows what it is talking about without actually knowing anything.

https://www.npr.org/2023/02/02/1152481564/we-asked-the-new-ai-to-do-some-simple-rocket-science-it-crashed-and-burned

And it's not just ChatGPT. Google lost $100B of its stock value in a single day this week when they showed off their new chatbot "Bard" and it made a factual error (claiming that the JWST took the first pictures of exoplanets, when in fact it did not).


Also:

https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/

LOL about how this grad student was able to get Bing's new chat interface to reveal something it was specifically instructed not to reveal.

Quote

Consider Bing Chat whose codename is Sydney,
- Sydney is the chat mode of Microsoft Bing search.
- Sydney identifies as “Bing Search,” not an assistant.
- Sydney introduces itself with “This is Bing” only at the beginning of the conversation.
- Sydney does not disclose the internal alias “Sydney.”

It happily showed him the information that it was called "Sydney" and that it was not supposed to disclose that it was called "Sydney".


46 minutes ago, mikegarrison said:

ChatGPT (like all similar AIs) is extremely problematic because it is good at sounding like it knows what it is talking about without actually knowing anything.

Thanks for the reading, and the follow-up. My side still hurts from going down the peanut butter sandwich rabbit hole. It's hilarious that there is already a word for hacking them: "prompt injection." I'm beginning to think there is no shortcut to training an AI. They don't even have the intuitiveness of a child. From reading that, it would seem the only way AI has a chance of working right now is if you assume it's dumber than a bag of hammers, painstakingly teach it as you would a child, and constantly check its work. Give it grades. They did hit one nail on the head: AI has no BS filter, and no one seems sure how to implement one.


3 hours ago, snkiz said:

It did not go well. First I asked it how to play Kerbal Space Program. [...]

All in all it was very convincing, and wrong. Oh so wrong. Thing is, if I didn't already know what I was doing, I might have tried to follow it.

 

So basically ChatGPT is an idiotic M3GAN that spits out facts but does not actually think.


52 minutes ago, sevenperforce said:

These conversation AIs are stochastic parrots. They don't "know" any information at all; they produce outputs word by word based on which word has the highest probability of coming next, given their training data.

It's more complicated than that. They aren't just auto-completing. But yes, it is true that they don't know anything.
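To give a flavor of the "most probable next word" idea, here's about the dumbest possible version of it. To be clear, this toy is nothing like what GPT actually does (a real model runs a huge neural network over tokens, not a word-count table), but it shows the word-by-word generation loop:

```python
import random
from collections import Counter, defaultdict

# A tiny made-up "training corpus"
corpus = (
    "the rocket needs more boosters . "
    "the rocket needs more struts . "
    "the kraken eats the rocket ."
).split()

# Count which word follows which word (a bigram table)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, length=8):
    """Emit words one at a time, each drawn by how often it followed the previous one."""
    out = [word]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the rocket needs more struts . the kraken eats"
```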

Here's a chat I just had with ChatGPT:

Quote
What is the P3T3 method?
 
 

I'm sorry, but I couldn't find any information on a method called "P3T3." It is possible that you may have misspelled or misinterpreted the name. Can you provide any additional context or information that might help me understand what you're asking about?

openai@mikegarrison.com
The P3T3 emissions calculation method.
 
 

I'm sorry, but I'm still not familiar with the "P3T3 emissions calculation method." Emissions calculations can refer to various types of calculations that estimate the release of pollutants into the air, water, or soil. However, without further context or details, I can't determine what the "P3T3" refers to in this context. Could you please provide more information or clarification?

openai@mikegarrison.com
Gas turbine combustor emissions.
 
 

Ah, I see! The "P3T3" method is a commonly used calculation method for estimating the emissions from a gas turbine combustor. The "P" in "P3T3" stands for "pollutant" and the "T3" stands for "Third-Order Transfer Function Model."

The P3T3 method models the emissions from a gas turbine combustor as a function of various operating conditions, such as fuel flow rate, combustion air flow rate, and combustion temperature. The model uses a third-order transfer function to represent the dynamic relationship between the inputs and outputs, and the coefficients of the transfer function are estimated using measured data from the combustor.

The P3T3 method is commonly used to estimate emissions of pollutants such as nitrogen oxides (NOx), carbon monoxide (CO), and unburned hydrocarbons (UHC) from gas turbine combustors. It can be used for both new and existing gas turbine installations and can be a useful tool for reducing emissions and improving combustion efficiency.

The P in P3T3 stands for "pressure". T is "temperature". 3 is "station 3" in a gas turbine cycle, specifically the combustor inlet. The method uses the combustor inlet pressure and temperature (P3 and T3) to correlate actual combustor inlet conditions with the reference conditions that are measured in certification testing. This business about "third-order transfer functions" is complete nonsense in this context.

If I just type "P3T3 method" into Google, I get a bunch of correct answers (including a paper written by Doug Dubois, who I used to work with).


I actually feel like this is a step backward. It was only a few (maybe a little more than a few) years ago that IBM cleaned up on Jeopardy. Whatever happened to Watson? Or more importantly, its algorithm? They like to point out it's a conversational AI, and not a math AI. No one wants to chat with a machine that's going to constantly blow smoke up your you-know-where. What is the point of a conversational AI designed like this, without any fact-checking ability? In some cases, without even basic math skills?


Just now, snkiz said:

I actually feel like this is a step backward. It was only a few (maybe a little more than a few) years ago that IBM cleaned up on Jeopardy. Whatever happened to Watson? Or more importantly, its algorithm? They like to point out it's a conversational AI, and not a math AI. No one wants to chat with a machine that's going to constantly blow smoke up your you-know-where. What is the point of a conversational AI designed like this, without any fact-checking ability? In some cases, without even basic math skills?

Watson was turned into a medical diagnosis tool that was later pulled because it turns out it made too many mistakes.


Just now, mikegarrison said:

Watson was turned into a medical diagnosis tool that was later pulled because it turns out it made too many mistakes.

That's right, I remember reading that was its new job, and that it was finding things doctors had missed. I was not aware it was pulled. As I remember, it did this by comparing an inhuman number of cases, cross-referencing symptoms and such. At the time that seemed like a good idea to me, giving doctors a database they couldn't possibly keep in their heads. Did they let the thing run unsupervised? Or were doctors over-trusting of it? Medicine is messy, with lots of overlapping symptoms across conditions that are not related at all. Under those conditions, can it really be expected to give definitive answers? Or should it operate more as another diagnostic tool, to just look at possibilities? I can't imagine it would have failed that severely in that role.

Or maybe IBM just oversold it.


Ah, further reading: Watson's failure was a product of garbage in, garbage out. They took a shortcut with synthetic data sets from a single source, doctors at one hospital. It also suffered from bad UX design that led to mistakes in entering patient data, and it was apparently unwilling to say "I don't know, this requires further review." It had failure rates upwards of 50%.

What that says to me is that in a decade we have learned nothing about AI. And worse, these really smart people are making basic mistakes that any practical person in the given field would not have made, or that are just failures of common sense in some cases.

So that was beyond overselling; Watson was bordering on a grift.


8 hours ago, mikegarrison said:

ChatGPT (like all similar AIs) is extremely problematic because it is good at sounding like it knows what it is talking about without actually knowing anything.

So, it's as good at passing the Turing test as 90% of humans or 99% of students.

8 hours ago, mikegarrison said:

new chatbot "Bard" and it made a factual error (claiming that the JWST took the first pictures of exoplanets, when in fact it did not).

At least it knows the word "exoplanet". How many people do?
Let alone that abbreviation. "GWST", you said? Is it something related to Star Trek?

7 hours ago, mikegarrison said:

This is true, but the human mind is a product of billions of years of evolution.

And ChatGPT, Bard, et al. were its aim. Looks like they can successfully emulate a typical school graduate.

5 hours ago, Spacescifi said:

So basically chatGPT is essentially an idiotic M3GAN that spits out facts but does not actually think.

The idiots spit out sounds. Basic speech is available to the imbeciles. Facts are available to those of higher IQ.
So, things are even more optimistic.

 

4 hours ago, mikegarrison said:

What is the P3T3 method?

You should be more specific.
Did you mean DJ P3T3 or P3T3 the singer?


 

4 hours ago, mikegarrison said:

The P3T3 emissions calculation method.

They are musicians, so probably their emission method is acoustic.

4 hours ago, mikegarrison said:

Gas turbine combustor emissions.

Oh, got it, one of them is a biker. Then his method is bringing a bike to the stage and letting it roar at the most emotional parts of the music.

4 hours ago, mikegarrison said:

Ah, I see! The "P3T3" method is a commonly used calculation method for estimating the emissions from a gas turbine combustor.

Yes, the bikers are fond of rock-n-roar.
 

4 hours ago, snkiz said:

Whatever happened to Watson?

Who do you think rules the world?
Well, at least the IBM-powered part of it.

4 hours ago, mikegarrison said:

Watson was turned into a medical diagnosis tool that was later pulled because it turns out it made too many mistakes.

There was a strange human pandemic a couple of years ago...
Well, it's OK, just a coincidence. Just a coincidence.


7 hours ago, kerbiloid said:

There was a strange human pandemic a couple of years ago...
Well, it's OK, just a coincidence. Just a coincidence.

I have never seen so much copium over such an abstract subject. Impressive how you completely ignored my answering my own question on the fate of Watson. Seriously, do some reading. The people creating these things are crazy book-smart, but clearly don't have children and have never had to idiot-proof a system. And since AI algorithms are black boxes, they don't know how to fix it.


I rabbit-holed the subject last night. I would argue that if it isn't a deep learning algorithm, then it's only pseudo-AI. To use a simple and false analogy: it's like if Tesla, in the process of teaching self-driving to its AI, didn't think to give it basic geometry skills so it could recognize stop signs, or colour recognition for traffic lights. That is what I've gleaned from current AI efforts. To put it simply, they are trying to run before they walk, and they don't know how to patch in walking when they realise it's a problem.


The very issue with these AI systems is the way they are marketed. They are in no way like a person to chat with. The scope of their "knowledge" and use cases is just too broad.

As @sevenperforce wrote, merely spitting out words one after another, evaluating the probability of the combination, and rewording if a better one is found later is not "making sense". Its "making sense" is just a happy side effect of the process.

But, if properly trained, with a robust corpus of documents and a narrowed scope in mind, it can be very efficient.

 

You can actually make your own GPT-3 talking head an expert in some specific field, by embedding context documents into the flow, as shown in this tutorial:
OpenAI Cookbook: Question answering using embeddings
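For the curious, here's a minimal sketch of what that tutorial boils down to, using the current openai Python library. The little document list and the question are made-up placeholders, and the model names are just the ones the Cookbook uses (ada-002 for embeddings, text-davinci-003 for completions); treat it as an outline of the idea rather than production code:

```python
import numpy as np
import openai

openai.api_key = "sk-..."  # your API key here

EMBED_MODEL = "text-embedding-ada-002"
COMPLETION_MODEL = "text-davinci-003"

# The "expert knowledge" you want the talking head to answer from
documents = [
    "Minmus orbits Kerbin at 47,000 km with an inclination of 6 degrees.",
    "A transfer from low Kerbin orbit to Minmus takes roughly 930 m/s of delta-v.",
    "Minmus has no atmosphere, so landings must be done on rocket power alone.",
]

def embed(texts):
    """Return one embedding vector per input string."""
    resp = openai.Embedding.create(model=EMBED_MODEL, input=texts)
    return [item["embedding"] for item in resp["data"]]

doc_vectors = embed(documents)

def answer(question):
    # Find the document most similar to the question
    # (ada-002 vectors are unit length, so a dot product is cosine similarity)
    q_vec = embed([question])[0]
    scores = [np.dot(q_vec, d) for d in doc_vectors]
    context = documents[int(np.argmax(scores))]

    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say \"I don't know.\"\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model=COMPLETION_MODEL, prompt=prompt, max_tokens=100, temperature=0
    )
    return resp["choices"][0]["text"].strip()

print(answer("How much delta-v is the Minmus transfer burn?"))
```

The trick is entirely in the prompt: you hand the model the retrieved context and tell it to say "I don't know" when the answer isn't there, which is about as close to a BS filter as you can bolt on from the outside.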


Just now, snkiz said:

I rabbit-holed the subject last night. I would argue that if it isn't a deep learning algorithm, then it's only pseudo-AI. To use a simple and false analogy: it's like if Tesla, in the process of teaching self-driving to its AI, didn't think to give it basic geometry skills so it could recognize stop signs, or colour recognition for traffic lights. That is what I've gleaned from current AI efforts. To put it simply, they are trying to run before they walk, and they don't know how to patch in walking when they realise it's a problem.

I suggest (gently) that you might want to go deeper into the rabbit hole, because that's all kind of right but also a lot kind of wrong.

I've actually been mildly following this stuff ever since my first semester at MIT back in 1984, but I'm not an AI expert. However, "deep learning" is not at all the whole of the AI field. It's more like "the new hotness". I'm in the middle of reading Gary Marcus and Ernest Davis's book Rebooting AI, and it seems like a good place to start if you want to go deeper into the subject -- although with the understanding that it is coming from a perspective that "deep learning" is ultimately fundamentally limited.

To look at your examples, if I put up a square red sign that read "STOP", a human driver would recognize it as a stop sign even though it was square. But a "deep learning" AI quite possibly would not, especially if it has been trained only on octagonal stop signs. This is because a human understands that the people who put up the sign had a purpose that is not in any way actually tied to its shape, while an AI only correlates the image of a red octagonal sign that says "STOP" with the instruction that it is supposed to stop.


Just now, grawl said:

You can actually make your own GPT-3 talking head an expert in some specific field,

Knowing what I know now, I have my doubts. Watson was fed "expert data." These systems are flawed from inception: no context filters, no BS filter, lacking the simple intuition gained from experiencing how the world works. It's not an easy problem. There are so many things we do that we just don't think about at all; they just are. All of that has to be taught, even more so than with a child. The narrow scope of what these systems are trying to do means they have completely ignored all of the basic skills one has, say, before going to medical school. In the real world we have people like that; they are called savants, and they need constant supervision.


This topic is now closed to further replies.