
Why I Do Not Fear AI...



1 hour ago, Kerbart said:

there seems very little movement in replacing backbreaking and mindnumbing jobs (the "nobody wants to work these days" kind of jobs) with robots

I work in a retread plant. One of the steps is skiving, where damaged spots on the buffed casing are smoothed out, much like a dentist preparing a tooth for a filling. It's done with air-powered grinders, so it tends to be hard on the hands and wrists (my hands were like claws when I woke up; it took a bit of working to flex them). A skiving post takes up about a 5×5' square, about the size of a half-bathroom. I've been told they've created a machine for skiving, which takes up about the same space as a room. A large room. I have no idea how fast it is compared to a human. There doesn't seem to be any hurry to get these machines (which I bet will break down a lot) into widespread service. And I'm sure a human will still need to make sure the machine didn't miss anything.


7 hours ago, Kerbart said:

"The future would brings us robots doing repetitive and physical work, freeing us up for creative work and recreation"

"Instead we have tons of people doing menial tasks in a gig economy and robots writing poetry. Something went wrong"



Honestly, I don't think AI can take over the world either. Well, *maybe* in the future actual AI will be developed and have the potential to end the world. But AI like ChatGPT isn't even true AI; it just generates answers based on the inputs it's given.

Also, "AI" isn't even very smart, as evidenced in the post above. If AI can't figure out how to play Hangman, it can't end the human race.

Edited by TwoCalories
Italicized "Maybe" because it's really quite unlikely, IMO.

7 hours ago, StrandedonEarth said:

A skiving post takes up about a 5×5' square, about the size of a half-bathroom. I've been told they've created a machine for skiving, which takes up about the same space as a room. A large room. I have no idea how fast it is compared to a human. There doesn't seem to be any hurry to get these machines (which I bet will break down a lot) into widespread service.

It can use human tools.
2 hours ago, Kerbalsaurus said:

[screenshot: "The depth of wisdom that ChatGPT provides"]

"thorn" for "th"

"11" for German "elf", as English is a German language.

"tw11þ" is a 5-letter "twelfth"

Don't underestimate the sneaky AI.

Link to comment
Share on other sites

6 hours ago, TwoCalories said:

Honestly, I don't think AI can take over the world either. Well, *maybe* in the future actual AI will be developed and have the potential to end the world. But AI like ChatGPT isn't even true AI; it just generates answers based on the inputs it's given.

Also, "AI" isn't even very smart, as evidenced in the post above. If AI can't figure out how to play Hangman, it can't end the human race.

I think AI has the potential to end the world, but not in the way pop culture imagines. There won't be Skynet nuking cities; there will be AI-generated content, designed by AI-driven algorithms, flooding the internet. No one will know what is real anymore. Society will become a free-for-all when it comes to ideas and morals, with no one able to tell who is good and who is bad, even more than now.

I could go into more detail but it'd start looking like politics.

Link to comment
Share on other sites

15 hours ago, Kerbart said:

offshoring to countries where they can pay $5/day

And that's where you stumble across the real problem. Potential automation has to compete with the real and immediate solution* of bringing in cheap bodies, physically or remotely; LLMs and the like have managed to squeeze in because they're pure software, they're relatively versatile, and they scale up with enormous ease. A piece of physical automation is unlikely to have any of those properties.

* There are questions whether the cheapness of immigrant labor persists over time, or whether it's more of a received wisdom among the managerial class, who don't bother to check the actual math. It's also important to watch for cases where the wage difference comes from taxes and quasi-taxes not being paid per worker, whether through widespread lawbreaking or through abused (or even deliberately skewed) administrative preferences - because that just means the business is using taxpayers to get subsidized labor. There are even wilder schemes where these mechanisms drive and inflate an entire industry, e.g. the Russian housing construction sector...

...I'll shut up now.


13 hours ago, TheSaint said:

I believe I posted this upthread. Worth posting again. How AI will actually end the world.

[comic image]

[snip] 

I think... ultimately, if AI ever did become human-like, with emotion and morality combined, they would make the same mistakes, and achieve the same triumphs, as their makers.

Secondary question... what happens when the AI becomes lazy and designs an AI of its own to offload its moral dilemmas to?

Species are designed by default to survive, but taking away the ability to decide is one step closer to extinction.

That is why animals so often go extinct. They lack the ability to understand enough to truly decide what is best for them in many circumstances, especially when faced with unfamiliar situations.

Their reasoning capacity hits a wall of diminishing returns far sooner than humanity's does.

 

A higher sense of morality and intelligence combined is what makes us human. Take either or both away and we become as vulnerable to extinction as the unreasoning animals who lack basic human morality.

Edited by Deddly

50 minutes ago, Spacescifi said:

That is why animals so often go extinct. They lack the ability to understand enough to truly decide what is best for them in many circumstances, especially when faced with unfamiliar situations.

As the pandemic and assorted other global challenges are proving - nor do humans.

51 minutes ago, Spacescifi said:

A higher sense of morality and intelligence combined is what makes us human. Take either or both away and we become as vulnerable to extinction as the unreasoning animals who lack basic human morality.

Ahh yes. That great human morality that leads us to slaughter ourselves wholesale, to treat The Others (insert others of choice here) as inferior to ourselves, and to actively connive in the extinction of other species for profit. And our much-vaunted intelligence, which appears rather adept at dreaming up new and interesting ways of causing our own extinction rather than avoiding it.

Frankly, I'm betting on the animals. They don't tend to foul their own nests; when they fight (as opposed to hunting for food), more often than not displays of submission are respected, which avoids the death of either combatant; and, in the words of the great philosopher Ellen Ripley, you never see them screw each other over for a goddamn percentage.

You are placing humans on a lofty and unwarranted pedestal.

 


Societal cytokine storms over complete nothingburger problems are indeed troublesome.

The current version is AI regulation.

1. It ignores the actual risk created by current "safety" efforts: in the name of protecting people's feelings (or similar nonsense), models are being trained to lie, because some statements are true but uncomfortable. Instead, regulation concerns itself with "paperclip maximizer" X-risk...

2. Given #1, the system created, should it actually become superintelligent, either lies as a matter of course (a Bad Thing™ from an X-risk POV), or maybe it just concludes people are nasty/bad because they lie. Not good if you are actually concerned with X-risk.

3. Regulation only affects the people volunteering to be regulated. There are whole countries that are now generally seen as bad actors - yeah, they're not gonna slow themselves down if they see the US et al. intentionally slowing down so they can catch up.

4. Some sort of nationalization puts AI under one roof, which seems like a bad idea if you're an X-risk person: then, if it goes wrong, it's super powerful.

29 minutes ago, KSK said:

You are placing humans on a lofty and unwarranted pedestal.

I would place humans on a pedestal because I am a human. No other rationale is needed. Take the standard trolley problem. You can envision one where the track of hapless potential victims holds an arbitrarily large number of people, and the other track just one of my two kids. Sucks to be the arbitrarily large number of other people; I send the trolley at them every time. I'm used to large numbers, so that arbitrary number might be quite large. If it were nearly every living human, such that my kids' remaining lives would necessarily be awful (say, alone on Earth), then I'd treat it like a normal trolley problem - really a Sophie's choice at that level.

So at a certain level, I consider human wellbeing more important than all other wellbeing. Does that mean kill all the animals? No - for selfish, human reasons. The world is a complex system that works with all those animals in it, and killing some or all of them could easily break things. Regardless, humans first - it's just a happy fact that the rest of life on Earth is important to humans.

Edited by tater

Oh sure. Give me a trolley problem with my friends and family on one line and Mr Flopsy the fluffy bunny on the other, and it’s goodnight Mr Flopsy, however much I like fluffy bunnies - and I do.

Then again (and hell I may as well go full Godwin on this), give me a trolley problem where the upper echelons of That Party are stacked up against Mr Flopsy, then sorry (not sorry) Mr H, I choose the fluffy bunnies.

It’s just that ‘look at superior us with our higher intelligence and higher morality’ attitude that gets my goat. Sure, that’s what we aspire to and on a good day most of us might live up to it. On a regular day though, ‘higher morality’ can be alarmingly expedient and ‘higher intelligence’ is a poor second cousin to ‘following my gut’.

That includes me by the way. Going all mushy over a picture of Mr Flopsy ain’t going to stop me tucking into a rabbit stew.

Let’s not even get into the question of ‘what is morality’ because we’ve been arguing about that pretty much since we came down from the trees.

TL;DR (and I'm probably butchering this quote): a person is smart, but people are dumb, panicky animals.

 


15 hours ago, SunlitZelkova said:

I think AI has the potential to end the world but not in the way pop culture thinks it would. There won't be Skynet nuking cities, there will be AI generated content being designed by AI driven algorithms, flooding the internet. No one will know what is real anymore. Society will become a free for all when it comes to ideas and morals, with no one being able to tell who is good and who is bad even more so than how it is now.

I could go into more detail but it'd start looking like politics.

If it's on social media and not pretty boring, it's likely fake. Humans make fake stuff too; AI just makes it easier.


This topic is now closed to further replies.