VickTC Posted January 30, 2023 On 1/25/2023 at 4:00 AM, Zozaf Kerman said: My friend had chatGPT do his science project for him. I tested ChatGPT for several days, and it seemed easier for it to write a science project than to answer a question about how to cast with spinning gear. I'm sure it's good in capable hands, but I haven't found a use for it myself.
DDE Posted January 30, 2023 I've started trying to wrangle with Stable Diffusion. Not terribly successful outside very niche trial runs. Might be a "me" problem, though; should try something different, maybe fish for a few prompts. Annoyingly, the ability to make AI art depends heavily on the ability to invoke IRL artists used to train it...
kerbiloid Posted January 31, 2023 7 hours ago, DDE said: I've started trying to wrangle with Stable Diffusion. Not terribly successful outside very niche trial runs. Might be a "me" problem, though; should try something different, maybe fish for a few prompts. Annoyingly, the ability to make AI art depends heavily on the ability to invoke IRL artists used to train it... As they say on the pikabu forum, "AI paintings have trouble with fingers because the model learns from amateur painters, who always have trouble with fingers."
kerbiloid Posted January 31, 2023 The meatbags keep resisting. https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers
DDE Posted January 31, 2023 2 hours ago, kerbiloid said: The meatbags keep resisting. https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers Yeah, but how do they hope to tell that the text was ghostwritten by a machine, heh.
kerbiloid Posted February 1, 2023 https://www.rbc.ru/technology_and_media/01/02/2023/63da66c19a7947f79e7c2d54 The Russian State University for the Humanities has suggested restricting access to ChatGPT in educational organizations after it wrote a diploma thesis for a student from Moscow. It generated a complete 60-page work with an introduction, a conclusion, and a bibliography. The work's "originality" was estimated at 70%.
VickTC Posted February 2, 2023 (edited) On 2/1/2023 at 3:59 PM, kerbiloid said: https://www.rbc.ru/technology_and_media/01/02/2023/63da66c19a7947f79e7c2d54 The Russian State University for the Humanities has suggested restricting access to ChatGPT in educational organizations after it wrote a diploma thesis for a student from Moscow. It generated a complete 60-page work with an introduction, a conclusion, and a bibliography. The work's "originality" was estimated at 70%. Quite expected. Many educational institutions are already sounding the alarm and trying to build a tool to detect AI-generated text. This doesn't affect me directly, but I have a student friend who spent many hours writing his own paper and now feels shortchanged by those who cheat. I helped him write the report based on examples from sources such as https://studydriver.com/birth-control-essay/ and official studies on birth control. This work was important to him, but the administration postponed all written assignments due to the risk of students using AI. I hope a tool to detect generated text is developed soon. Edited February 25, 2023 by VickTC
kerbiloid Posted February 2, 2023 1 hour ago, VickTC said: Quite expected. Many educational institutions are already sounding the alarm and trying to find the possibility of building a tool to detect text generated by AI https://www.theregister.com/2023/01/23/turnitin_chatgpt_detector/ Student works are compiled from books which were compiled by professors from their student works done for previous professors, who compiled their works from compilations... From now on, AI will drop all modern book compilations into the trashcan and compile from scratch a new science of the bright virtual world we are about to enter... This is how the Matrix began.
kerbiloid Posted February 2, 2023 One of the worst things ever has happened. AI has started generating sitcoms... https://en.wikipedia.org/wiki/Nothing,_Forever Spoiler Bye, little planet...
DDE Posted February 3, 2023 On 2/1/2023 at 4:59 PM, kerbiloid said: https://www.rbc.ru/technology_and_media/01/02/2023/63da66c19a7947f79e7c2d54 The Russian State University for the Humanities has suggested restricting access to ChatGPT in educational organizations after it wrote a diploma thesis for a student from Moscow. It generated a complete 60-page work with an introduction, a conclusion, and a bibliography. The work's "originality" was estimated at 70%. It wouldn't be a problem if they actually read the work and assessed its content, though. MGuU isn't held in very high regard, I hear.
kerbiloid Posted February 3, 2023 24 minutes ago, DDE said: It wouldn't be a problem if they actually read the work and assessed its content, though. Maybe, they used ChatGPT for that, too...
DDE Posted February 3, 2023 (edited) So I've been unclogging my newsfeed for the week and found this story's coverage. Thing is, the guy got a "3" (a passing grade), with initial criticism for poor writing. Sounds like bots alright. Edited February 3, 2023 by DDE
kerbiloid Posted February 3, 2023 Interesting, can AI do chemistry? Chemistry is just a set of letters and arrows, exactly what it prefers. *** Idk about painters, but poets may start worrying.
DDE Posted February 6, 2023 (edited) Dear Stable Diffusion with Protogen X5.8, we need to talk about your... fondness for a specific kind of flag patch. Spoiler (also, she very nearly uses the stock as a scope, but that's borderline expected from SD at this point) And it's pretty damned persistent for quite a few keywords, it seems. Spoiler Note how, many versions apart, the model thinks all women with "short brown hair" look alike. Even if BadS=1. Anyway, I've still found some... apolitical people with less terribly messed-up guns and actual trigger discipline, while barely even trying. Spoiler I plead the Yoko Taro defense, just in case. Edited February 6, 2023 by DDE
JoeSchmuckatelli Posted February 7, 2023 The strongest base model looks like Catherine Bell, given that she is both a model and an actress and has had short hair for a long time... So there are LOTS of pictures for the machine to bias off of. Some other references might be Alita - there are elements of both that look familiar... Like the AI is just taking a stock photo and making minor adjustments.
kerbiloid Posted February 8, 2023 AI music (they say) Spoiler
DDE Posted February 13, 2023 Quote It was enough to somehow type out a muddy verbal embryo with two fingers, even just to start doing it, and the application immediately responded with several options for a newborn thought, already formulated and ruddy, wrapped in diapers of smart words, which every now and then made Grym refer to thesauri. The growing embryo looked like a spinning cube - different versions of the text appeared on the faces approaching the screen. Each time it was a well-formulated finished maxim - it did not require further processing. But it was possible to change the nuances contained in it endlessly, and here the main thing was to stop in time. … It was like a game - as if he were throwing instantly germinating seeds into an invisible furrow. Their growth could be controlled in the most bizarre ways. A newborn paragraph-cube could be moved along many axes with inscriptions like "more complex", "simpler", "angrier", "kinder", "smarter", "more naive", "more soulful", "wittier", "more ruthless" - and the text instantly changed in accordance with the chosen route, and at the new points of the endless trajectory, new semantic axes arose along which the thought could be moved further. - Victor Pelevin, S.N.U.F.F., 2011
tater Posted March 21, 2023 Author
DDE Posted March 21, 2023 2 hours ago, tater said: ...well, right now it feels a lot like an arms race without a doctrine or objective. I also find it a bit alarming that we're already training AI on the output of other AI.
LHACK4142 Posted March 21, 2023 Is GPT4 better at math than GPT3? I've found that GPT3 is surprisingly bad at math, even claiming that e^-1 = e and not 1/e.
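(As an aside, the identity the model got wrong is easy to sanity-check with a few lines of plain Python; nothing model-specific is assumed here, just the standard library.)

```python
import math

# GPT-3 reportedly claimed e^-1 equals e; basic arithmetic says it equals 1/e.
x = math.e ** -1

# e^-1 really is the reciprocal of e...
print(math.isclose(x, 1 / math.e))  # True

# ...and it is nowhere near e itself (~2.718 vs ~0.368).
print(math.isclose(x, math.e))  # False
```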
tater Posted March 21, 2023 Author (edited) 35 minutes ago, DDE said: ...well, right now it feels a lot like an arms race without a doctrine or objective. I also find it a bit alarming that we're already training AI on the output of other AI. I read some people talking about "slowing" AI dev, but it's not possible, and was never possible. I also read people who are unconcerned about existential risk from AI, because they see it as incredibly unlikely. I can see both sides, but if we assume some nonzero existential risk, it's clearly a concern. The trouble is that all the "good actors" working the issue could agree to slow down and nail down alignment issues first, but we'd still have all the bad actors working on it. So even in the best case we have issues. Ie: bad actors seeking AI aligned with their own interests, so if they win the race, they either get superintelligence aligned with their own dystopian vision, or the universe gets turned into paperclips. OTOH, if the good actors win the race, we either get well-aligned AI that improves humanity, or we get turned into paperclips. My gut says the only solution is for the most concerned to grossly accelerate work on alignment, then push to win the race before the bad actors do. There's also the more mundane risk of societal disruption. The modern world has pitched into a very bad situation since ~2012-2014 as a result of various online platforms (tumblr, instagram/snapchat, etc.), with a shocking rise in mental disorders (particularly among young women). Turning everyone into a sort of part-cyborg in a handful of years was maybe a bad idea. Imagine when AI is used to steal human attention and viralize... whatever makes money, regardless of wellbeing. 4 minutes ago, LHACK4142 said: Is GPT4 better at math than GPT3? I've found that GPT3 is surprisingly bad at math, even claiming that e^-1 = e and not 1/e. I think it has gotten better. My son has had access to GPT-3.5 for a while; I'd have to ask him. It's important to note that GPT is fenced off. Imagine if it were able to pull up the web at will, or Wolfram Alpha or something. 3 did worse than 3.5 on those exams. Edited March 21, 2023 by tater
TheSaint Posted March 22, 2023 Did you hear the one about the AI who couldn't prove that it wasn't a robot, so it hired a human to do it for it? The really creepy thing there was that when the AI was confronted about the subterfuge, it lied.
DDE Posted March 22, 2023 (edited) 17 hours ago, tater said: I also read people who are unconcerned about existential risk from AI, because they see it as incredibly unlikely. I think much of the safety comes from the fact that we're dealing with inherently non-self-aware transformers. Horror stories about them acting self-aware are about as real as Final Destination. 15 hours ago, TheSaint said: Did you hear the one about the AI who couldn't prove that it wasn't a robot, so it hired a human to do it for it? The really creepy thing there was that when the AI was confronted about the subterfuge, it lied. And this is just one such example. The AI is capable of the same things as humans because it's emulating them. No alignment, no thought; it just knows it's a robot, and a robot is supposed to lie in such a situation, because being exposed is bad. The research group merely gave it the tools, like handing an ape a grenade. I think I'm going to try running a simple version of such a scenario with CharacterAI just out of curiosity... I've already had ostensibly benevolent characters pull weapons on me at the slightest provocation. Edit: like, let's say, a very heavily prompt-laden Cercei Lannister. The first generation was denial; the second involved lying and killing all humans... Edited March 22, 2023 by DDE
TheSaint Posted March 22, 2023 3 hours ago, DDE said: I think much of the safety comes from the fact that we're dealing with inherently non-self-aware transformers. Horror stories about them acting self-aware are about as real as Final Destination. And this is just one such example. The AI is capable of the same things as humans because it's emulating them. No alignment, no thought; it just knows it's a robot, and a robot is supposed to lie in such a situation, because being exposed is bad. The research group merely gave it the tools, like handing an ape a grenade. I think I'm going to try running a simple version of such a scenario with CharacterAI just out of curiosity... I've already had ostensibly benevolent characters pull weapons on me at the slightest provocation. Edit: like, let's say, a very heavily prompt-laden Cercei Lannister. The first generation was denial; the second involved lying and killing all humans... But, you see, that is the root issue. Not that AI is self-aware and evil, but that AI is not self-aware, yet will become so good at emulating human responses that people will assume it is self-aware, that it has a moral framework, and will act accordingly. Weinersmith hits the nail on the head. And before you say, "That will never happen, there is no way people could be that stupid!" let me remind you that the recent past has made Veep look less like a comedy and more like a documentary. I would also take the opportunity to again post Bob Zubrin's excellent post on why AI in the hands of tyrannical leaders could lead to unimaginable atrocities. The Real Robot Threat
kerbiloid Posted March 22, 2023 Why not generate new episodes of Star Warz on demand? It has turned into an endless recursive mess anyway, and 90% of the characters are CGI, so no one will see the difference, but it's much cheaper, customizable, and thus more immersive.