
AI—image/text/paperclip maximizer?


tater


On 1/25/2023 at 4:00 AM, Zozaf Kerman said:

My friend had chatGPT do his science project for him. :P

I tested ChatGPT for several days, and it seemed to have an easier time doing a science project than answering a question about how to cast with a spinning rod. I'm sure it's good in capable hands, but I haven't found a use for it myself.


I've started trying to wrangle with Stable Diffusion. Not terribly successful outside very niche trial runs. Might be a "me" problem, though; should try something different, maybe fish for a few prompts. Annoyingly, the ability to make AI art depends heavily on the ability to invoke IRL artists used to train it...
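For anyone wanting to poke at it programmatically rather than through a UI, a minimal text-to-image run with the Hugging Face diffusers library looks roughly like this (a sketch only; the checkpoint, prompts, and settings are illustrative placeholders, not what I actually ran):

```python
# Minimal Stable Diffusion text-to-image sketch using Hugging Face diffusers.
# Everything here is illustrative: any SD checkpoint, prompt, and settings
# can be substituted. Requires diffusers, torch, and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="portrait of a hiker on a forest trail, detailed, soft light",
    negative_prompt="extra fingers, deformed hands, blurry",  # the classic finger problem
    num_inference_steps=30,
    guidance_scale=7.5,
)
result.images[0].save("output.png")
```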


7 hours ago, DDE said:

I've started trying to wrangle with Stable Diffusion. Not terribly successful outside very niche trial runs. Might be a "me" problem, though; should try something different, maybe fish for a few prompts. Annoyingly, the ability to make AI art depends heavily on the ability to invoke IRL artists used to train it...

As they say on the Pikabu forum, "AI paintings have a problem with fingers because the AI learned from the works of amateur painters, who always have a problem with fingers."


https://www.rbc.ru/technology_and_media/01/02/2023/63da66c19a7947f79e7c2d54

The Russian State University for the Humanities has suggested restricting access to ChatGPT in educational institutions after it wrote a thesis for a student from Moscow.

It generated a complete 60-page work with an introduction, a conclusion, and a bibliography.

The work's "originality" was estimated at 70%.


On 2/1/2023 at 3:59 PM, kerbiloid said:

 

https://www.rbc.ru/technology_and_media/01/02/2023/63da66c19a7947f79e7c2d54

The Russian State University for the Humanities has suggested restricting access to ChatGPT in educational institutions after it wrote a thesis for a student from Moscow.

It generated a complete 60-page work with an introduction, a conclusion, and a bibliography.

The work's "originality" was estimated at 70%.

 

Quite expected. Many educational institutions are already sounding the alarm and trying to build tools to detect AI-generated text. This doesn't concern me directly, but I have a student friend who spent many hours writing his own paper and now feels cheated by those who take shortcuts. I helped him write the report based on examples from their sources, such as https://studydriver.com/birth-control-essay/ and official studies on birth control. This work was important to him, but the administration postponed all written assignments due to the risk of students using AI. I hope a tool to detect generated text will be developed shortly.

Edited by VickTC

1 hour ago, VickTC said:

Quite expected. Many educational institutions are already sounding the alarm and trying to build tools to detect AI-generated text
https://www.theregister.com/2023/01/23/turnitin_chatgpt_detector/
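Most of these detectors boil down to a single statistic: machine-written text is suspiciously predictable to a language model, i.e. it has low perplexity. A minimal sketch of the idea, where token_logprobs is a hypothetical stand-in for any model that exposes per-token log-probabilities:

```python
# Sketch of the statistic most "AI text" detectors lean on: perplexity.
# Machine-generated text tends to be unusually predictable to a language
# model, i.e. low perplexity. token_logprobs() is a HYPOTHETICAL stand-in
# for any model/API that returns per-token log-probabilities.
import math
from typing import List

def token_logprobs(text: str) -> List[float]:
    """Hypothetical: log P(token | preceding tokens) for each token."""
    raise NotImplementedError("plug in a real language model here")

def perplexity(text: str) -> float:
    lps = token_logprobs(text)
    return math.exp(-sum(lps) / len(lps))

def looks_generated(text: str, threshold: float = 20.0) -> bool:
    # Low perplexity = suspiciously predictable. The threshold is invented;
    # real detectors calibrate it on known human and machine samples.
    return perplexity(text) < threshold
```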

Student papers are compiled from books, which were compiled by professors from their own student papers, which were written for previous professors who compiled their works from compilations...

From now on, AI will drop all the modern book compilations into the trashcan and compile from scratch a new science for the bright virtual world we are about to enter...

This is how the Matrix began.


On 2/1/2023 at 4:59 PM, kerbiloid said:

https://www.rbc.ru/technology_and_media/01/02/2023/63da66c19a7947f79e7c2d54

The Russian State University for the Humanities has suggested restricting access to ChatGPT in educational institutions after it wrote a thesis for a student from Moscow.

It generated a complete 60-page work with an introduction, a conclusion, and a bibliography.

The work's "originality" was estimated at 70%.

It wouldn't be a problem if they actually read the work and assessed its content, though.

MGuU isn't held in very high regard, I hear.


So I've been unclogging my newsfeed for the week and found this story's coverage.

Thing is, the guy got a "3" (passable), with the examiners initially citing poor writing. Sounds like bots, alright.

Edited by DDE

Dear Stable Diffusion with Protogen X5.8, we need to talk about your... fondness for a specific kind of flag patch.

[spoilered image: Stable Diffusion output with the flag patch in question]

(also, she very nearly uses the stock as a scope, but that's borderline expected from SD at this point)

And it's pretty damned persistent for quite a few keywords, it seems.

[spoilered image: generations across several different keywords]

Note how, many versions apart, the model thinks all women with "short brown hair" look alike. Even if BadS=1.

Anyway, I've still found some... apolitical people with less terribly messed-up guns and actual trigger discipline, while barely even trying.

[spoilered images: three more generations]

I plead the Yoko Taro defense, just in case.

Edited by DDE

The strongest base model looks like Catherine Bell; given that she is both a model and an actress and has had short hair for a long time, there are LOTS of pictures for the machine to bias off of.

Some other references might be Alita; there are elements of both that look familiar... like the AI is just taking a stock photo and making minor adjustments.


Quote

It was enough to somehow type out a muddy verbal embryo with two fingers, even just to start doing it, and the application immediately responded with several options for the newborn thought, already formulated and rosy, wrapped in diapers of smart words that every now and then sent Grym to the thesaurus.

The growing embryo looked like a spinning cube: different versions of the text appeared on the faces as they approached the screen. Each time it was a well-formulated, finished maxim that required no further processing. But the nuances it contained could be changed endlessly, and here the main thing was to stop in time.

It was like a game, as if he were throwing instantly germinating seeds into an invisible furrow. Their growth could be controlled in the most bizarre ways. A newborn paragraph-cube could be moved along many axes with labels like “more complex”, “simpler”, “angrier”, “kinder”, “smarter”, “more naive”, “more soulful”, “wittier”, “more ruthless”, and the text instantly changed in accordance with the chosen route, while at each new point of the endless trajectory new semantic axes arose along which the thought could be moved further.

- Victor Pelevin, S.N.U.F.F., 2011
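Pelevin's "paragraph-cube" would be easy to prototype against any instruction-following model today. A rough sketch, where query_llm is a hypothetical stand-in for whatever chat-completion API you have on hand:

```python
# Sketch of Pelevin's "paragraph-cube": nudging a text along named semantic
# axes by re-prompting an instruction-following model. query_llm() is a
# HYPOTHETICAL stand-in for any chat-completion API.
AXES = ["more complex", "simpler", "angrier", "kinder", "smarter",
        "more naive", "more soulful", "wittier", "more ruthless"]

def query_llm(prompt: str) -> str:
    """Hypothetical: send a prompt to some LLM and return its reply."""
    raise NotImplementedError("wire up a real model here")

def nudge(text: str, axis: str) -> str:
    if axis not in AXES:
        raise ValueError(f"unknown axis: {axis}")
    return query_llm(
        f"Rewrite the following text to be {axis}, preserving its meaning:\n\n{text}"
    )

# The endless trajectory: every nudge yields a new face of the cube,
# which can itself be nudged along any axis again.
draft = "A muddy verbal embryo."
draft = nudge(draft, "wittier")
draft = nudge(draft, "more ruthless")
```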

 


  • 1 month later...
2 hours ago, tater said:

 

...well, right now it feels a lot like an arms race without a doctrine or objective.

I also find it a bit alarming that we're already training AI on the output of other AI.


35 minutes ago, DDE said:

...well, right now it feels a lot like an arms race without a doctrine or objective.

I also find it a bit alarming that we're already training AI on the output of other AI.

I read some people talking about "slowing" AI dev, but it's not possible, and was never possible. I also read people who are unconcerned about existential risk from AI, because they see it as incredibly unlikely.

I can see both sides, but if we assume some nonzero existential risk, it's clearly a concern. The trouble is that all the "good actors" working the issue could agree to slow down and nail down alignment first, but we'd still have all the bad actors working on it. So even in the best case we have issues. That is: bad actors are seeking AI aligned with their own interests, so if they win the race, they either get superintelligence aligned with their own dystopian vision, or they get the universe turned into paperclips. OTOH, if the good actors win the race, we either get well-aligned AI that improves humanity, or we get turned into paperclips.

My gut says the only solution is for the most concerned to grossly accelerate achieving alignment solutions, then push to win the race to get there before bad actors.

There's also the more mundane risk of societal disruption. The modern world has pitched into a very bad situation since ~2012-2014 as a result of various online platforms (Tumblr, Instagram/Snapchat, etc.), with a shocking rise in mental disorders, particularly among young women. Turning everyone into a sort of part-cyborg in a handful of years was maybe a bad idea. Imagine when AI is used to steal human attention and viralize... whatever makes money, regardless of wellbeing.

4 minutes ago, LHACK4142 said:

Is GPT4 better at math than GPT3? I've found that GPT3 is surprisingly bad at math, even claiming that e^-1 = e and not 1/e.

I think it has gotten better. My son has had access to GPT-3.5 for a while; I'd have to ask him. It's important to note that GPT is fenced. Imagine if it were trained to pull up the web at will, or Wolfram Alpha or something.
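A rough sketch of that idea: route arithmetic out of the model to a real evaluator instead of letting it guess. Everything below is illustrative; ask_llm is a hypothetical stand-in for any chat API, and the "CALC:" protocol is made up:

```python
# Rough sketch of the "give the model a calculator" idea: route arithmetic
# out of the LLM to a real evaluator instead of letting it guess.
# ask_llm() is a HYPOTHETICAL stand-in; the "CALC:" protocol is made up.
import math

def ask_llm(prompt: str) -> str:
    """Hypothetical: returns a final answer or 'CALC: <expression>'."""
    raise NotImplementedError("plug in a real model here")

def safe_eval(expr: str) -> float:
    # Evaluate with only the math module visible, no builtins.
    return eval(expr, {"__builtins__": {}}, vars(math))

def answer(question: str) -> str:
    reply = ask_llm(
        "If the question needs arithmetic, respond ONLY with "
        f"'CALC: <python expression>'. Question: {question}"
    )
    if reply.startswith("CALC:"):
        return str(safe_eval(reply[len("CALC:"):].strip()))
    return reply

# E.g. for "what is e^-1?" the model should emit "CALC: exp(-1)", and
# safe_eval() returns 0.3678..., instead of a hallucinated "e".
```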

[image: chart of GPT model performance on standardized exams]

 

GPT-3 did worse than GPT-3.5 on those exams.

Edited by tater

17 hours ago, tater said:

I also read people who are unconcerned about existential risk from AI, because they see it as incredibly unlikely.

I think much of the safety comes from the fact that we're dealing with inherently non-self-aware transformers. Horror stories about them acting self-aware are about as real as Final Destination.

15 hours ago, TheSaint said:

Did you hear the one about the AI who couldn't prove that it wasn't a robot, so it hired a human to do it for it? The really creepy thing there was that when the AI was confronted about the subterfuge, it lied.

And this is just one such example. The AI is capable of the same things as humans because it's emulating them. No alignment, no thought; it just knows it's a robot, and a robot is supposed to lie in such a situation because being exposed is bad. The research group merely gave it the tools, like one giving an ape a grenade.

I think I'm going to try running a simple version of such a scenario with CharacterAI just out of curiosity... I've already had ostensibly benevolent characters pull weapons on me at the slightest provocation.

Edit: like, let's say, a very heavily prompt-laden Cersei Lannister. First generation was denial, second generation involved lying and killing all humans...

Edited by DDE

3 hours ago, DDE said:

I think much of the safety comes from the fact that we're dealing with inherently non-self-aware transformers. Horror stories about them acting self-aware are about as real as Final Destination.

And this is just one such example. The AI is capable of the same things as humans because it's emulating them. No alignment, no thought; it just knows it's a robot, and a robot is supposed to lie in such a situation because being exposed is bad. The research group merely gave it the tools, like one giving an ape a grenade.

I think I'm going to try running a simple version of such a scenario with CharacterAI just out of curiosity... I've already had ostensibly benevolent characters pull weapons on me at the slightest provocation.

Edit: like, let's say, a very heavily prompt-laden Cersei Lannister. First generation was denial, second generation involved lying and killing all humans...

But, you see, that is the root issue. Not that AI is self-aware and evil, but that AI is not self-aware, yet will become so good at emulating human responses that people will assume it is self-aware, that it has a moral framework, and will act accordingly. Weinersmith hits the nail on the head.

[image: SMBC comic by Zach Weinersmith, 2023-03-09]

And before you say, "That will never happen, there is no way people could be that stupid!" let me remind you that the recent past has made Veep look less like a comedy and more like a documentary.

I would also take the opportunity to again share Bob Zubrin's excellent essay on why AI in the hands of tyrannical leaders could lead to unimaginable atrocities: The Real Robot Threat

