Jump to content

Why I Do Not Fear AI...


Recommended Posts

Shower thought: this whole "injection" thing is really just a Jedi mind trick, especially when these bots have really short prompts.

Link to comment
Share on other sites

4 hours ago, tater said:

Until it gets embodied—then what happens to the artisanal cheese-makers?

None has been able to make an general robot as in I robot, not even an VR gizmo who is two arms and an camera for head who can be used to do human tasks. Not even in space there budgets are in the billions and astronauts has to do dangerous EVA and use clumsy space suits. You could even use this on an robotic mission who don't have to come back.
Yes robotic hands is a thing, are they to unreliable or weak / slow?  

Link to comment
Share on other sites

44 minutes ago, magnemoe said:

None has been able to make an general robot as in I robot, not even an VR gizmo who is two arms and an camera for head who can be used to do human tasks. Not even in space there budgets are in the billions and astronauts has to do dangerous EVA and use clumsy space suits. You could even use this on an robotic mission who don't have to come back.
Yes robotic hands is a thing, are they to unreliable or weak / slow?  

Not yet. It;s not about budgets, it's about the model being there that works—which was literally not a thing until the last year or two.

Link to comment
Share on other sites

14 hours ago, tater said:

Not yet. It;s not about budgets, it's about the model being there that works—which was literally not a thing until the last year or two.

Its not about software but hardware then we talk about robot arms.

Link to comment
Share on other sites

4 hours ago, magnemoe said:

Its not about software but hardware then we talk about robot arms.

True to an extent, though Boston Dynamics has managed decent robotic parts for a while now.

We'll have to see how "Optimus" turns out I suppose.

Link to comment
Share on other sites

On 5/25/2023 at 6:55 PM, Spacescifi said:

You have heard it said "Guns don't kill people. People do."

Scifi and fear mongers may fear AI but the only thing to fear is... us. And that will always be so long humans remain human. The good news is that not all humans are evil, and that too will always be so long humans are human.

 

So I don't fear AI. It is just a tool.

 

As for a thinking real sentient AI... do I fear it?

No. Because right now... it's hilarious... not even remotely a threat.

 

No one feared the car will ever replace their horse, back when the car was moving at barely walking speed.

And look at it now.

 

It's the right approach to not fear this and think of it like a tool, but I'm sure what we see today is only going to be better, until it needs to human to prompt it to do something.

 

I've seen people on youtube already toy with this by getting instances of GPT's prompting each other, like (Smith) agents, until they reach and perfect the outcome for the goal the human has given them.

 

What a time to be alive.

Edited by GGG-GoodGuyGreg
Link to comment
Share on other sites

4 hours ago, tater said:

True to an extent, though Boston Dynamics has managed decent robotic parts for a while now.

We'll have to see how "Optimus" turns out I suppose.

True and an human hand copy is mostly relevant in an VR remote operation setting. But just this is an huge market, not only space but disabling explosives and removing dangerous materials is much larger. 

Link to comment
Share on other sites

23 hours ago, farmerben said:

 

Now that is probably how people on the rear row experienced the sermon on the mount, one of the most significant speeches in history.  And they could not watch it on streaming some months later :) 
Guess the first bible came out after they was dead and it would be out of range of them anyway. 

Now for the Life of Brian Movie this scene is significant as it shows that Brian was not playing as Jesus Christ but lived at the same time and was caught up in the mad events.
So don't get why Christians hate this movie as much, again the trend at the time was mad. 
One generation later Rome sacked Palestine after another uprising, this was very rare for Roman hold provinces, not even standard for conquered ones. 
 

Link to comment
Share on other sites

2 hours ago, kerbiloid said:

1930: A car has replaced a horse for you.
2030: AI has explained you that you need neither.

Horses was obsolete in the US military at in WW 2. 
US special forces used horses in Afghanistan. It was even some cavalry charges in that war US personnel took part in and an horseback patrol worked well in the broken terrain and some guys on horses did not draw the attention that an military off-road truck would draw. 
 

Link to comment
Share on other sites

1 hour ago, magnemoe said:

True and an human hand copy is mostly relevant in an VR remote operation setting. But just this is an huge market, not only space but disabling explosives and removing dangerous materials is much larger. 

A human hand copy is most useful for a robot that is to work in existing human environments, with existing tools.

Space uses, or UXB dearming, etc—those can have bespoke tools. A robot in a home, or existing business—that needs to work in areas where humans can already move around, using tools that are at hand.

Link to comment
Share on other sites

4 hours ago, farmerben said:

On the Bataan peninsula some US cavalry with Colt .45 pistols opposed a river crossing by a tank and won.

In Sengoku Jedi they took out a whole armored group  even without pistols.

Spoiler

 

Even the tank and the helicopter.

Edited by kerbiloid
Link to comment
Share on other sites

-Why don't you fear AI?

-Because there's already someone braver stepped forward.

https://www.sohu.com/a/679815534_158264

Spoiler

There are some robots in China's hospitals for guiding and answering some simple questions. Because the lady's family might be in a serious condition, so she cried out in anxiety. Then the robot came up to her and recognized that she was crying and asked, "Do you need me to tell you a joke?"

F**g deserve it dumb robot

Edited by steve9728
Link to comment
Share on other sites

On 5/26/2023 at 2:02 PM, K^2 said:

This is going to be a huge problem in the nearest future. I am concerned that LLM's are being introduced throughout with very little regard for the safety of data, and since we can't solve the alignment problem, there is absolutely no way to guarantee that a given LLM will not misuse the data that it's given access to. Security breaches we're going to see in the next few years are going to be among the most spectacular. We're talking private info about people, financial accounts of individuals and companies, government secrets... It's going to be a huge mess. And I don't think legislature's going to keep up with how fast these systems are evolving. There's going to be a huge amount of damage done. And not because AI id clever and malicious. It's going to be because AI is naive and trying to be helpful. That's the real danger.

GDPR is thankfully technology agnostic and holds the leaking company as the responsible party. So I am covered even when AI gains citizenship and is considered employee instead of tool. Of course the companies can't be bothered before the compensations make the first big one go under. Completely foreseeable economic repercussions? They don't exist without an exact precedent. /s

Also,

Spoiler

All our thoughts have come
Here but now they're gone
Reasons don't fear the AI
Nor do the goal, the deed or the crop
We can be like they are

Think on, baby (don't fear the AI)
Baby, take my mind (don't fear the AI)
We'll be able to confer (don't fear the AI)
Baby, I'm your machine

Link to comment
Share on other sites

I am a bit late to this party, but I read this thread and have been talking about it a bit with colleagues today. One of them told me about a test where GPT-4 successfully asked someone on TaskRabbit to solve a Captcha for it to get it past the "I am not a robot" roadblock for bots on a website (ref. GPT-4 solves Captcha).

To be fair, the GPT-4 instance was told to lie about why it needed help with the Captcha and to lie about it actually being a robot, so it didn't come up with that idea on its own... But this is still an example that's indicative of AI safety concerns to come. (The movie "Ex Machina" springs to mind...)

 

Link to comment
Share on other sites

artificial intelligence has to be better than natural stupidity. 

worry when they invent artificial stupidity. 

Edited by Nuke
Link to comment
Share on other sites

15 hours ago, Nuke said:

artificial intelligence has to be better than natural stupidity. 

worry when they invent artificial stupidity. 

Done so many times, vlw4gfox3f881.jpg?width=640&crop=smart&a
Note that this will work even on modern advanced AI unless trained for it. 
An very classic strategy against AI in most real time strategy game is to have it chase you around so they come in strung out against your well fortified and prepared lines. 
 

Link to comment
Share on other sites

24 minutes ago, magnemoe said:

Done so many times, vlw4gfox3f881.jpg?width=640&crop=smart&a
Note that this will work even on modern advanced AI unless trained for it. 
An very classic strategy against AI in most real time strategy game is to have it chase you around so they come in strung out against your well fortified and prepared lines. 
 

Most so called AI in games are algorithms coded with simple rules by the programmer, rather than neural nets that evolve giant matrices of floating point numbers.  Exceptions exist, such as alphastar that plays Starcraft II.  It won't make as many predictable mistakes as an algorithm does. 

Link to comment
Share on other sites

OK, jokes over.

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

Quote

As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.  Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.

This example, seemingly plucked from a science fiction thriller, mean that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.

 

Link to comment
Share on other sites

Guest
This topic is now closed to further replies.
×
×
  • Create New...