
Why I Do Not Fear AI...



15 hours ago, DDE said:

Nope.

Quote

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]

 


I've had a drink at the bar tonight that's been advertised as invented by ChatGPT. The bartending career is 100% safe for now.

On the topic of that "simulation," somebody commented that when the military says "simulation," they mean a LARP, and I can't get over that.


4 hours ago, K^2 said:

I've had a drink at the bar tonight that's been advertised as invented by ChatGPT. The bartending career is 100% safe for now.

On the topic of that "simulation," somebody commented that when the military says "simulation," they mean a LARP, and I can't get over that.

No kidding, I too was wondering just how "abstract" the simulation was. Were they just using ChatGPT that was told to narrate as a drone?


On 6/3/2023 at 3:01 AM, K^2 said:

I've had a drink at the bar tonight that's been advertised as invented by ChatGPT. The bartending career is 100% safe for now.

One of my friends (a software developer) admitted that he uses ChatGPT to help him write code. I'm like "...okay?" How bad is his code that ChatGPT is better? (But then he's management, so he doesn't do a lot of coding anymore.)


3 hours ago, PakledHostage said:

One of my friends (a software developer) admitted that he uses ChatGPT to help him write code. I'm like "...okay?" How bad is his code that ChatGPT is better? (But then he's management, so he doesn't do a lot of coding anymore.)

Lots of software development is pretty brain-dead, like SQL database updates and inserts from an object that roughly matches the table, and the same for the front end that edits that row. You run into plenty of exceptions, but simply generating a form with entry fields would already help.
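
For illustration, here's a minimal C++ sketch of that kind of boilerplate: building an INSERT from an object whose fields roughly match the table. The Person struct, table name, and buildInsert helper are all made up for this example, and real code would use parameterized queries rather than string concatenation.

#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical example object; in practice this would mirror a table's columns.
struct Person {
    std::string name;
    int age;
};

// Build an INSERT statement from column/value pairs.
// (Real code should use parameterized queries instead of string concatenation.)
std::string buildInsert(const std::string& table,
                        const std::vector<std::pair<std::string, std::string>>& cols)
{
    std::ostringstream names, values;
    for (size_t i = 0; i < cols.size(); ++i) {
        if (i > 0) { names << ", "; values << ", "; }
        names << cols[i].first;
        values << "'" << cols[i].second << "'";
    }
    return "INSERT INTO " + table + " (" + names.str() + ") VALUES (" + values.str() + ");";
}

int main()
{
    Person p{"Jeb", 28};
    // Prints: INSERT INTO persons (name, age) VALUES ('Jeb', '28');
    std::cout << buildInsert("persons", {{"name", p.name}, {"age", std::to_string(p.age)}})
              << std::endl;
    return 0;
}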
 


2 hours ago, kerbiloid said:

Let AI rewrite its own source.

AI is a neural network, not standard code. It might get further trained by use, but that carries some risk of shifting it in an unwanted direction.
And the real issue here is that we cannot document how an AI works. You can test it, but that only proves that this exact configuration works as intended; change the parameters a bit and it might not.
You can safeguard the systems to some degree, much the same way we make stuff safe for humans.
 


On 6/4/2023 at 8:27 AM, PakledHostage said:

One of my friends (a software developer) admitted that he uses ChatGPT to help him write code. I'm like "...okay?" How bad is his code that ChatGPT is better? (But then he's management, so he doesn't do a lot of coding anymore.)

It's surprisingly not awful in a lot of simple cases. It can even be queried for some simple fixes. A test I ran with 3.5 is in the spoiler below.

Spoiler
Quote
Hi. I have a small C++ program, and it doesn't work correctly. It just runs forever. Could you fix it?
 
#include <iostream>

int main()
{
    const char* msg = "Hello World!";
    size_t z = 0;
    for (;msg;++msg,++z);
    std::cout << z << std::endl;
    return 0;
}

 

ChatGPT

Certainly! I can help you with that. The issue in your program is with the termination condition of the for loop. Let's go through the code and fix it.

 
#include <iostream>

int main()
{
    const char* msg = "Hello World!";
    size_t z = 0;
    for (; *msg; ++msg, ++z)
    {
        // loop body
    }
    std::cout << z << std::endl;
    return 0;
}

 

In your original code, the termination condition of the for loop was msg, which is a pointer. In C++, a null character ('\0') is used to terminate C-style strings. So, the corrected termination condition should be *msg, which checks whether the current character being pointed to by msg is null or not.

By dereferencing msg using *msg, we can access the value being pointed to and check if it is null. If it is null, the loop will terminate, as expected.

Now, when the loop reaches the null character at the end of the string, *msg evaluates to false, and the loop will exit.

Let me know if you have any further questions or issues!

 

I intentionally avoided giving it clues to the meaning of the z variable and made the mistake a single-symbol error that results in code that compiles but does not work correctly. As you can see, not only has it found the error, but it fixed the code, wrote an explanation of what went wrong, and gave some recommendations. Yes, I'm sure this is a common enough problem with beginner code that a match exists somewhere, but that's kind of the whole point - if ChatGPT has seen code that does what you're asking it to do, it will be able to do a good job with it.
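
For reference, the corrected loop is just a hand-rolled strlen; a quick check along these lines (my own sketch, not part of the ChatGPT transcript above) confirms that z ends up equal to std::strlen(msg):

#include <cassert>
#include <cstring>
#include <iostream>

int main()
{
    const char* msg = "Hello World!";
    size_t z = 0;
    // Same corrected loop as the fix above: walk the string until the null terminator.
    for (const char* p = msg; *p; ++p, ++z) {}
    assert(z == std::strlen(msg)); // both count 12 characters
    std::cout << z << std::endl;   // prints 12
    return 0;
}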

What it does horribly with is context that involves new things. Particularly bad if you have a large code base that the code you're writing has to interact with. There is no way for ChatGPT to learn that context, even if it's part of the input, because it's still trying to match against its training set. So it's far more likely to hallucinate something unhelpful.

In short, yes, you can use it to speed up certain aspects of writing code. You have to know what you're doing, though, and know where the limitations are. I'm still dubious about this expediting the work of someone who's proficient in a language and the type of problem being solved, but I can see myself trying to use ChatGPT to build something I know how to make, but in a language that I don't know terribly well or have forgotten over the years. Like, if I had to write Java code, or something.


So Boston Dynamics has put Spot on the market at $75k, IIRC. They really need to make that chassis mass-producible to bring the cost down by a couple of orders of magnitude. Then the attach node could mount a vacuum cleaner, with a robotic snake-hose to reach wherever it needs to. The AI would be needed to identify dirt and then steer the hose to remove the dirt.

Yeah. That would be scary, never knowing where it'll decide it needs to stick the hose, and how it'll get it there...


9 minutes ago, StrandedonEarth said:

So Boston Dynamics has put Spot on the market at $75k, IIRC. They really need to make that chassis mass-producible to bring the cost down by a couple of orders of magnitude. Then the attach node could mount a vacuum cleaner, with a robotic snake-hose to reach wherever it needs to. The AI would be needed to identify dirt and then steer the hose to remove the dirt.

Yeah. That would be scary, never knowing where it'll decide it needs to stick the hose, and how it'll get it there...

Does Spot do anything other than move by remote control?


45 minutes ago, tater said:

Does Spot do anything other than move by remote control?

Well, as far as I can tell, Spot is much like a horse: a self-propelled chassis. What it can do depends on what you equip it with; with Spot that's usually cameras and other sensors for inspections, although there is also an arm available. And it can follow a programmed route autonomously. I think it's the ATLAS testbed that's still under remote guidance.

https://www.bostondynamics.com/resources/blog/doing-more-spot

Just replace/upgrade the controller tablet with AI and stand back...


1 hour ago, StrandedonEarth said:

Well, as far as I can tell, Spot is much like a horse: a self-propelled chassis. What it can do depends on what you equip it with; with Spot that's usually cameras and other sensors for inspections, although there is also an arm available. And it can follow a programmed route autonomously. I think it's the ATLAS testbed that's still under remote guidance.

https://www.bostondynamics.com/resources/blog/doing-more-spot

Just replace/upgrade the controller tablet with AI and stand back...

Adapting Tesla's FSD, retrained with pedestrian data, could be interesting.


6 hours ago, StrandedonEarth said:

So Boston Dynamics has put Spot on the market at $75k, IIRC. They really need to make that chassis mass-producible to bring the cost down by a couple of orders of magnitude.

Unitree Go1 is marketed at $2700.


8 hours ago, tater said:

 

Still a sort of programmed inspection tour I think, vs it being fully autonomous, but cool.

My overall concern is down the road, when the older staff that knows how to inspect and run the plant manually ages out and retires. A skeleton human staff remains, with vague and hesitant hands-on skills, if any. Then a repeat of the Carrington Event or an EMP occurs. This would be an extreme challenge even for a full staff with excellent hands-on skills, so I'm not optimistic about how it would go with those skills non-existent in an emergency.


On 6/6/2023 at 8:19 AM, DDE said:

Unitree Go1 is marketed at $2700.

https://shop.unitree.com/products/unitreeyushutechnologydog-artificial-intelligence-companion-bionic-companion-intelligent-robot-go1-quadruped-robot-dog
It looks cute; it even has a carrying handle. I would give it more of a dog-looking head and a tail for extra cuteness.

 


2 hours ago, darthgently said:

My overall concern is down the road, when the older staff that knows how to inspect and run the plant manually ages out and retires. A skeleton human staff remains, with vague and hesitant hands-on skills, if any. Then a repeat of the Carrington Event or an EMP occurs. This would be an extreme challenge even for a full staff with excellent hands-on skills, so I'm not optimistic about how it would go with those skills non-existent in an emergency.

My wife doesn't do robotic surgery. Many of the robot cases make smaller holes in people, but they also take longer. Sometimes much longer. There's a new guy at the hospital who does loads of robot cases, and he'll take 3 hours on something she'd do in an hour or so as an open case. But here's where the analogy to what you mention comes in: sometimes robotic cases can't be completed with the robot, and they have to convert on the table to open cases. My wife said a lot of the younger surgeons have very little experience with open procedures. So they mess around a few hours with the robot, realize they have to convert, then they convert and it takes a few more hours—for a case the older docs would do open in an hour total. Yeah, it will be odd when nobody can do the task themselves.

Another surgical example: my father-in-law said that when he was teaching neurosurgery in Nepal right after he retired, a case had to convert from modern anesthesia to chloroform dripped onto a cloth. The older docs there knew the technique as well, but the young ones had never seen it before. So he was not only running the surgery, he also had to instruct someone on dripping the chloroform at the right rate to keep the patient asleep, lol.


2 hours ago, tater said:

So they mess around a few hours with the robot, realize they have to convert, then they convert and it takes a few more hours—for a case the older docs would do open in an hour total.

Ask the anesthesiologists how they feel about their 70+ yo patient spending even one unnecessary minute knocked out. And the entire time, the whole team is getting more weary and error-prone.


3 minutes ago, darthgently said:

Ask the anesthesiologists how they feel about their 70+ yo patient spending even one unnecessary minute knocked out. And the entire time, the whole team is getting more weary and error-prone.

Yeah, there's a time and place for robotic surgery, but it is overhyped according to my wife. Ideal in some cases, no difference in some, worse in some.

Regardless, as robotic surgery becomes more and more standard, experience doing open cases in the surgeon population drops. Heck, this bit about charting is true, and hilarious:

 

 

