
Why I Do Not Fear AI...



On 6/20/2023 at 8:52 PM, DDE said:

It's an undocumented feature.

[attached image]

 

Now that confused me a lot, as I was thinking of 155 mm artillery shells and the square-cube law. At 40 mm, you have smart cluster bomblets.


https://github.com/mshumer/gpt-author/

Quote

This project utilizes a chain of GPT-4 and Stable Diffusion API calls to generate an original fantasy novel. Users can provide an initial prompt and enter how many chapters they'd like it to be, and the AI then generates an entire novel, outputting an EPUB file compatible with e-book readers.

A 15-chapter novel can cost as little as $4 to produce, and is written in just a few minutes.
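
Mechanically, the "chain" is just a loop of completion calls. A minimal sketch of the idea in Python (this is not gpt-author's actual code; generate() is a hypothetical stand-in for whatever LLM API you use):

```python
# Minimal sketch of a chapter-chaining loop, in the spirit of the linked project.
# generate() is a hypothetical stand-in for any LLM completion API.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def write_novel(premise: str, n_chapters: int) -> list[str]:
    # One call for the outline, then one call per chapter,
    # feeding each chapter the outline plus the tail of the previous chapter.
    outline = generate(f"Write a {n_chapters}-chapter outline for: {premise}")
    chapters: list[str] = []
    for i in range(1, n_chapters + 1):
        tail = chapters[-1][-2000:] if chapters else ""
        chapters.append(generate(
            f"Outline:\n{outline}\n\nEnd of previous chapter:\n{tail}\n\n"
            f"Write chapter {i} in full."
        ))
    return chapters  # packaging into an EPUB is a separate step
```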

 


43 minutes ago, tater said:

https://github.com/mshumer/gpt-author/

This project utilizes a chain of GPT-4 and Stable Diffusion API calls to generate an original fantasy novel. Users can provide an initial prompt and enter how many chapters they'd like it to be, and the AI then generates an entire novel, outputting an EPUB file compatible with e-book readers.

A 15-chapter novel can cost as little as $4 to produce, and is written in just a few minutes.

Now I don't think this will generate high-quality literature. It would, however, do wonders for things like generating side quests in video games.
An endless number of story-based quests of very decent quality. Or you simply generate a few hundred and select the 100 best to bake into the game.


3 hours ago, magnemoe said:

Now I don't think this will generate high-quality literature. It would, however, do wonders for things like generating side quests in video games.
An endless number of story-based quests of very decent quality. Or you simply generate a few hundred and select the 100 best to bake into the game.

What % of the fantasy novels written are “high quality literature?”

5%? Lower? 1 in 1000?

The OpenAI servers host many sessions at the same time and could create many thousands of these quickly.

Next, some training on which are better… this will become far, far better, and quickly.

 

This will leverage people with good basic ideas, if they are smart. Prompt your interesting story ideas, read the result, and if it's workable, it can always be edited. If you already have work, train it on "your voice" so it writes in your existing style.

Someone tell George RR Martin so he can use it to finish A Song of Ice and Fire ;)

 


11 hours ago, tater said:
Quote

This project utilizes a chain of GPT-4 and Stable Diffusion API calls to generate an original fantasy novel. Users can provide an initial prompt and enter how many chapters they'd like it to be, and the AI then generates an entire novel, outputting an EPUB file compatible with e-book readers.

A 15-chapter novel can cost as little as $4 to produce, and is written in just a few minutes.

 

Bye, Hollywood plot-writers. Here is somebody else to roll the dice and put the results into words, just like you do.

Actually, this was possible much earlier.

Stanislaw Lem, "A Pocket Computer for Sci-Fi Fans", from "Fiction and Futurology", published in 1973.
Roll the dice and follow the choice to create the plot.

I can't find an English version, only the Russian one (and the original was in Polish), so I've translated it here myself.
Though an English version definitely existed, as in 1976 Lem was expelled from the US sci-fi writers' association, and, they say, due to this scheme.

[Images: kerbiloid's translation of Lem's dice-roll plot-generation table]

Edited by kerbiloid

4 hours ago, tater said:

What % of the fantasy novels written are “high quality literature?”

5%? Lower? 1 in 1000?

The OpenAI servers host many sessions at the same time and could create many thousands of these quickly.

Next, some training on which are better… this will become far, far better, and quickly.

This will leverage people with good basic ideas, if they are smart. Prompt your interesting story ideas, read the result, and if it's workable, it can always be edited. If you already have work, train it on "your voice" so it writes in your existing style.

Someone tell George RR Martin so he can use it to finish A Song of Ice and Fire ;)

To me, this is functionally equivalent to that old meme:

Step 1:   Idea.
Step 2:   ???
Step 3:   Profit!

For Step 2, you can substitute 'training on which are better' for the '???'.

Writing and editing is a complex process and not one that's amenable to being broken down into a set of rules because, in practice, most  writing 'rules' are just guidelines. They're useful general principles but knowing when not to use them is as important as knowing when to use them. Knowing how to use them is another matter entirely.

Consider the well known 'show don't tell' maxim. Great rule - until it isn't.  Sometimes it's easier - and better - to just tell your reader something rather than wrecking the flow or pacing of the story by trying to show it. Even if showing is the better option, how to show is a judgment call. At the very least it requires some understanding of a character, how they're likely to respond to a situation, and how to make that response believable.  And that's before we get into consistent characterisation, and character development over a story arc - a believable response at the beginning of a story may not be a believable response at the end of the same story.

LLMs are not (to my understanding at least) set up for this.  They have no concept or understanding of a story beyond a collection of words arranged in a plausible way according to their training data. And without that understanding 'training on which are better' is an oxymoron, because an LLM is incapable of judging what is 'better'. 

As a wise person once said to me: "I can tell you how to make that chapter different but I can't necessarily tell you how to make it better."

This, I think, is what will happen with LLM-generated stories - it'll be easy enough to generate variations on a given story, but generating 'better' ones? Not so much. Even if you have an army of volunteers reading the stuff that the LLM spits out, grading it for quality, and using those grades to somehow train the LLM, it's not going to work. For openers, good luck getting any two volunteers to agree on which story is better. But, more importantly, whatever qualities made version 2543 of a given story better than version 1723 are probably not going to help much in making the next story any better.

Likewise for 'training an LLM' on my voice. That training supposes that I've written enough stories with the same general plot and characters that the LLM has a decent data set to use when writing the next story of the same type. Which isn't going to happen, and I think it's unlikely to happen even for very prolific authors.

TL;DR version. I found the above post to be a glib understatement of what goes into any sort of creative writing process (even potboiler novels). Apologies to @tater if that's not how it was intended.


They should create a recursive+fractal object tree with verbs for transitions, in a constructed mark-up language, to generate the story.

Then implement its verbalization into a custom human language.

I believe it won't be worse than 99% of biologically written plots and stories...

Also, the PC text quests prove that 90% of the visualisation is done by the reader's imagination.
So, just a customized readable thing, following the reader's desires, will read very well, like a foreign-language story in an amateur translation.

To match the reader's preferences, the model should read the reader's favourite stories, to follow the writing style and words that the reader prefers.

Thus, a once-generated story in the mark-up language can be verbalized very differently for different readers.
(In Shakespeare style for one, in leet-speak and emojis for another.)

They should split the story into standard basic actions, and fill the primitives with customized/random details:
(Action: "%Person1% %punch% %Person2% with %Object1%" → "%Person1% %receive_object% %Object1%" %concatenate_action% "%Person1% %random_punch% {target=%Person2%, tool=%current_object%}" + %random_ornamentation%) → "Jill took the book from the table, and accurately slapped Jack's head."
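
A toy Python sketch of this primitive-filling (all templates and word lists here are invented for illustration):

```python
import random

# Toy sketch of the primitive-filling idea above.
# Templates and word lists are invented for illustration.
TEMPLATES = [
    "{p1} took the {obj} from the {place}, and {adv} {hit} {p2}'s head.",
    "{p1} {hit} {p2} with the {obj} near the {place}.",
]
WORDS = {
    "p1": ["Jill", "Alice"], "p2": ["Jack", "Bob"],
    "obj": ["book", "frying pan"], "place": ["table", "shelf"],
    "adv": ["accurately", "gently"], "hit": ["slapped", "whacked"],
}

def render_action() -> str:
    # Pick a template, then fill every slot with a random choice.
    fills = {slot: random.choice(options) for slot, options in WORDS.items()}
    return random.choice(TEMPLATES).format(**fills)

print(render_action())
# e.g. "Jill took the book from the table, and accurately slapped Jack's head."
```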

Edited by kerbiloid

1 hour ago, kerbiloid said:

"Jill took the book from the table, and accurately slapped Jack's head."

Now we know why Jack fell down and broke his crown.  In one of an infinity of parallel fractal storylines anyway.

Stories are what bind societies and cultures. When we are spoon-fed AI-personalized stories and narratives, what happens? In the best stories, the protagonist doesn't just face his "favorite" flavors of reality, but faces reality as it stands.

Why would I want a game AI to cater to my "favorites" when I know the world doesn't center on my "favorites"? I, and most people, don't need help being self-centered. Stories that have survived for millennia are typically about becoming less self-centered while still valuing the self. As if they had survival value for a cultural species, omg! ;)


38 minutes ago, darthgently said:

Why would I want a game AI to cater to my "favorites" when I know the world doesn't center on my "favorites"? I, and most people, don't need help being self-centered. Stories that have survived for millennia are typically about becoming less self-centered while still valuing the self.

If you are interested in birds but indifferent to horses, it's unlikely you would prefer to read horse-nomad sagas rather than bird-catcher stories.

When I'm indifferent to the Argonians and Khajiit, and never focus on them in Skyrim, I'm definitely skipping the stories of the Argonian Maid, just from pure human racism.
(Thank you, Divine Divinity, you have killed the hobbits, dwarves, and elves for me, as after that game I can't perceive them as anything other than needlessly excessive, caricature versions of humans.
On the other hand, I'm really glad of this, as in the ZX Spectrum era I was trying to imagine a dwarf submarine or a dwarf tank, which turned out to be weird and needlessly excessive.)

When you don't want to read another snot-and-sugar story about human relations, and prefer the bloody magical art of WH40k, you don't want a ten-season "pony" saga, and vice versa.

So, you actually just don't want exactly repeated stories, and it's just a question of how wide the randomness interval is.

Edited by kerbiloid

7 hours ago, KSK said:

TL;DR version. I found the above post to be a glib understatement of what goes into any sort of creative writing process (even potboiler novels). Apologies to @tater if that's not how it was intended.

I didn't have time to write much when I posted that; I just found the link interesting. But I have given this particular idea (AI in art) a little thought over the last year.

I've never been good at creative writing myself, though I read rather a lot of fiction, and I'm genuinely unsure how one sets off to write a novel. I recall Harlan Ellison explaining how he wrote every day. He treated it like a job: just got up and wrote. He used to write in store windows sometimes, and would generate a short story from prompts written on a slip of paper by customers. All I can say is that a year ago, when I heard about some of the generative "art" and messed around a little with Midjourney, other than some fun images with their own bizarre "AI art style," it was fairly lousy. Fast forward a year... and it's amazingly capable.

I think in the case of LLMs and text, we're going to see continuous progress. The South Park collect-underpants, ???, profit meme... I'm not seeing training as some sort of magic here. A number of papers have shown that self-reflection produces better results in LLMs, as does asking a model to "think step by step." Specific domain training—in this case fantasy writing, and specifically a dataset where humans have labeled works by quality—is a path they could certainly try. If a model generated multiple stories on the same prompt, and was then asked to analyze them, I wonder how it would do? Could it in fact "pick a winner," or at least answer "out of these X stories, which one is likely to be considered the best written?" Then test with humans (people who have to read the stories). Short stories would be better here, because reading a whole "meh" fantasy novel is not on my list of things to do, much less a bunch of them.
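
The "pick a winner" pass I'm imagining could be as simple as a pairwise tournament, something like this sketch (generate() is a hypothetical stand-in for an LLM call, and whether its judgments mean anything is exactly the open question):

```python
# Sketch of an LLM-as-judge tournament over candidate stories.
# generate() is a hypothetical stand-in for an LLM call; the prompt is an assumption.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def pick_winner(stories: list[str]) -> str:
    # Pairwise "which is better written?" comparisons; the current best advances.
    best = stories[0]
    for challenger in stories[1:]:
        verdict = generate(
            "Which story is better written? Answer only A or B.\n\n"
            f"Story A:\n{best}\n\nStory B:\n{challenger}"
        )
        if verdict.strip().upper().startswith("B"):
            best = challenger
    return best  # then test the winner on human readers
```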

If I were someone engaged in creative writing or digital art, I'd be pretty concerned, and/or learning these tools to leverage the ability I already had. I'm not saying the GitHub project linked above is that tool for fantasy writing, but looking forward 1, 2, 3 years, who knows?

Let's say George RR Martin could dump in the text of all his published work he considered good (loads of tokens, his books are long), then ask the model questions about them... like "In A Song of Ice and Fire, are there loose plot threads that were never resolved?" (I can think of a few "Chekhov's gun" things I was waiting to see that just disappeared.) With all those works still in the model, ask it to write further chapters. Or ask it to summarize character arcs, then propose where the characters can go from here—keeping all the other characters in mind as they interact with each other. Martin apparently doesn't work off of an outline, but for this use case, maybe he does, and prompts it with an outline of the directions he saw characters going in. Such an experiment would have to be done by the author, I think—they could read the output and have a feel for quality, and "voice," in a way that I would not (even if some of the output would fool me).

The proof of the pudding will be in the eating. We'll know what LLMs are capable of in fiction when something gets published that is sufficiently good. I'm old enough to remember when chess was a human intellectual pursuit that computers would never "get"; then, after computers beat humans, the goalposts moved to "of course computers can brute-force chess." Go, OTOH, was far harder... then that fell, and the goalposts moved again. "It can make some pictures, but they're crappy, look at the extra fingers, etc, lol!" Then models started making photo-realistic images within a few months, and the goalposts moved again. LLMs can write some boilerplate novels, but can't write anything with any emotional underpinning that will engage real readers... until they do.

 

Edited by tater

5 hours ago, DDE said:

Once again, AI proves to be comparable to a lazy intern.

At face value, if the above statement is true, AI is shockingly powerful already.

Interns being human, after all, the suggestion is that it's equal to a meh human. A meh human that can work 24/7/365, using an interface that is just barely out of the gate. If current models were packaged for easier use, versus prompt engineering in a chat format, they would have far more utility. And if the next version changes as much as the current one did over its predecessor... more powerful still.

Again, the change in what these systems are capable of over just months is pretty amazing.

EDIT, I just got to where I could watch the vid:

I wonder if the script proposed in the video above was zero-shot, or if the model was given previous scripts to base it on? The latter, to me, would be the minimum effort.

Edited by tater

1 hour ago, tater said:

At face value, if the above statement is true, AI is shockingly powerful already.

Interns being human, after all, the suggestion is that it's equal to a meh human. A meh human that can work 24/7/365, using an interface that is just barely out of the gate. If current models were packaged for easier use, versus prompt engineering in a chat format, they would have far more utility. And if the next version changes as much as the current one did over its predecessor... more powerful still.

Again, the change in what these systems are capable of over just months is pretty amazing.

Here I agree; the question is where the next roadblock is. I still remember Intel talking about the Pentium 4 reaching 10 GHz, still single-core and 32-bit I assume. And this was before heat-pipe coolers; they made the short-lived BTX motherboard form factor so they could get maximum airflow over the CPU. Then AMD went 64-bit multi-core and the game changed again.

Here the question is how far we can push this. 10 times better? Sure, but that might be expensive. 100 times? I say that is the 10 GHz CPU.
On the other hand, 20 years ago AI was a joke, and you could write a hard science fiction far-future novel where you had more advanced Roombas and automated lawn mowers and not much more.


I think that the current UI is very "hobbyist" or "programmer" centric. I.e., the people really beating up GPT-4 and seeing what it can do tend to be technical people. As it becomes more usable (Bing is an example of this), more use cases—and more flaws—will become apparent. The fact that there is such a profound difference in quality between "zero-shot" answers and reflective responses, or step-by-step ones, or prompts where you ask the model to ask you questions to improve its answer, is telling here. At some point I would expect a model to do this automatically, limiting the need for "prompt crafting."
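
That reflective pattern is mechanically just extra calls. A rough sketch (generate() again being a hypothetical stand-in for an LLM call):

```python
# Sketch of the draft -> self-critique -> revise loop.
# generate() is a hypothetical stand-in for an LLM call.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API call here")

def reflective_answer(question: str, rounds: int = 2) -> str:
    answer = generate(f"Think step by step, then answer:\n{question}")
    for _ in range(rounds):
        critique = generate(
            f"Question:\n{question}\n\nDraft answer:\n{answer}\n\n"
            "List the concrete flaws in this draft."
        )
        answer = generate(
            f"Question:\n{question}\n\nDraft:\n{answer}\n\nFlaws:\n{critique}\n\n"
            "Write an improved answer."
        )
    return answer
```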

 

 


On 6/26/2023 at 3:11 PM, tater said:

I didn't have time to write much when I posted that; I just found the link interesting. But I have given this particular idea (AI in art) a little thought over the last year.

I've never been good at creative writing myself, though I read rather a lot of fiction, and I'm genuinely unsure how one sets off to write a novel. I recall Harlan Ellison explaining how he wrote every day. He treated it like a job: just got up and wrote. He used to write in store windows sometimes, and would generate a short story from prompts written on a slip of paper by customers. All I can say is that a year ago, when I heard about some of the generative "art" and messed around a little with Midjourney, other than some fun images with their own bizarre "AI art style," it was fairly lousy. Fast forward a year... and it's amazingly capable.

I think in the case of LLMs and text, we're going to see continuous progress. The South Park collect-underpants, ???, profit meme... I'm not seeing training as some sort of magic here. A number of papers have shown that self-reflection produces better results in LLMs, as does asking a model to "think step by step." Specific domain training—in this case fantasy writing, and specifically a dataset where humans have labeled works by quality—is a path they could certainly try. If a model generated multiple stories on the same prompt, and was then asked to analyze them, I wonder how it would do? Could it in fact "pick a winner," or at least answer "out of these X stories, which one is likely to be considered the best written?" Then test with humans (people who have to read the stories). Short stories would be better here, because reading a whole "meh" fantasy novel is not on my list of things to do, much less a bunch of them.

If I were someone engaged in creative writing or digital art, I'd be pretty concerned, and/or learning these tools to leverage the ability I already had. I'm not saying the GitHub project linked above is that tool for fantasy writing, but looking forward 1, 2, 3 years, who knows?

Let's say George RR Martin could dump in the text of all his published work he considered good (loads of tokens, his books are long), then ask the model questions about them... like "In A Song of Ice and Fire, are there loose plot threads that were never resolved?" (I can think of a few "Chekhov's gun" things I was waiting to see that just disappeared.) With all those works still in the model, ask it to write further chapters. Or ask it to summarize character arcs, then propose where the characters can go from here—keeping all the other characters in mind as they interact with each other. Martin apparently doesn't work off of an outline, but for this use case, maybe he does, and prompts it with an outline of the directions he saw characters going in. Such an experiment would have to be done by the author, I think—they could read the output and have a feel for quality, and "voice," in a way that I would not (even if some of the output would fool me).

The proof of the pudding will be in the eating. We'll know what LLMs are capable of in fiction when something gets published that is sufficiently good. I'm old enough to remember when chess was a human intellectual pursuit that computers would never "get"; then, after computers beat humans, the goalposts moved to "of course computers can brute-force chess." Go, OTOH, was far harder... then that fell, and the goalposts moved again. "It can make some pictures, but they're crappy, look at the extra fingers, etc, lol!" Then models started making photo-realistic images within a few months, and the goalposts moved again. LLMs can write some boilerplate novels, but can't write anything with any emotional underpinning that will engage real readers... until they do.

 

Thanks for the thoughtful response @tater. Apologies for not replying sooner but I've been out of easy keyboard reach for the last couple of days. Also apologies again if I came on a bit strong in my last post - I do not like the recent wave of AI-with-everything, and that probably showed. To me the whole enterprise reeks of typical Big Tech arrogance: 'we're going to disrupt you whether you like it or not, we're going to scrape all your data to do it - whether you like it or not - and we don't much care what kind of a mess we make provided that our stock goes up in the process.'

With that said, I've also been giving AI in writing a bit of thought from a writer's point of view. Background: I write science fiction as a hobby, mostly short stories. You're in no danger of seeing my stuff in a bookstore any time soon (if ever :) ) but I have sold a couple of stories to reputable magazines, one of which was recently republished in an anthology. I would consider myself a novice writer, and not a particularly prolific one, but I have walked a little way down that particular road.

In my opinion, writing is like any other craft; you learn by doing it, and the more you practice the better you get. There are a lot of books about writing out there, they're not all consistent, and I think they're probably more helpful once you have some context to work from, i.e. you're already writing and want to learn more about it. Your Harlan Ellison example sounds consistent with what I've heard - if you're writing enough to pay the bills (and the vast majority of authors can't support themselves solely by writing), then you'll need to treat it like any other job and get into the mindset of sitting down every day and just knocking the words out.

If that sounds a bit soulless, I think a lot of writers would tell you that the first draft of anything isn't going to be fit for publication (putting it in forum-friendly terms!). What saves it is revision, and that can involve ripping out and rewriting whole chunks of story, maybe rewriting them multiple times from different perspectives or from different character points of view until you find something that fits. Cutting out the parts that don't work, expanding on parts that need more explanation or that would add to the story, maybe adding totally new material.

I appreciate that the way an LLM produces a story is probably very different, but if you want to get an LLM to produce better stories, I would argue that you still need some way of enabling it to identify what makes a better story better. It's not obvious to me how you do that. To use your example, I don't see how you get an LLM to pick a winner from a set of short stories generated from the same prompt. If you could, then yes, you could maybe use it to evolve a story by generating drafts from a prompt, getting it to pick the best draft, generating a new set of drafts from that best draft, and so on - something like the sketch below.
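
For concreteness, that loop might look like this (both helpers are hypothetical stand-ins; defining 'best' is the part I'm not convinced by):

```python
# Sketch of the draft-evolution loop described above.
# Both helpers are hypothetical stand-ins; the selection criterion is the hard part.

def generate_drafts(seed: str, n: int) -> list[str]:
    raise NotImplementedError  # e.g. n LLM rewrites of the seed story

def pick_best(drafts: list[str]) -> str:
    raise NotImplementedError  # an LLM- or human-judged winner, somehow

def evolve_story(prompt: str, generations: int = 5, n: int = 4) -> str:
    best = prompt
    for _ in range(generations):
        best = pick_best(generate_drafts(best, n))
    return best
```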

The problem is - how do you define best? What criteria do you give the LLM for picking a winner?  How do you direct that evolution to produce an improved story rather than just iterating random drafts on random drafts?  Put another way, how do you teach quality and then boil 'quality' down into a set of rules which you could train an LLM on? I'm not sure you can. If quality was an algorithmic process then you could teach it to any aspiring writer. Empirical evidence suggests that this doesn't happen. :) 

Edit:  Obviously it's possible for writers to improve but not in the 'follow this one weird checklist to write prize-winning fiction - editors will hate you' sense.

In the absence of a set of rules for training an LLM, I suppose that one could play editor, that is, read a first-draft story produced by an LLM and provide feedback which it could then use to produce an improved draft. The problem with that is that I'm not convinced you could communicate that feedback in a way that would get the intended result, without getting into so much detail that you're basically telling the LLM, via prompts, exactly what to do at each step. At which point, it's not really the LLM which is producing the draft.

Having said all that, I can see LLMs getting better at producing first-draft material, mainly through stylistic improvements, but I don't believe they have the capability to revise, that is, to analyze a first draft and figure out how to improve it.

At the risk of adding to this wall of text, it might be clearer to give an actual example based on one of my own published stories. 

The opening to my first draft was... okay. It was enough to get me started and to get the words flowing but otherwise it wasn't very satisfactory.  Likewise, the end of the story was okay but it needed foreshadowing to turn a rather contrived ass-pull into a reasonable twist. So I put in some bits and pieces of foreshadowing. At which point I realised that I could set the whole thing up via the opening. Better yet, that would give a much more immediate opening which would (hopefully) drop the reader straight into the story, or at least give them an immediate hook to get their attention rather than starting with a fairly waffly few paragraphs. The key thing is that it wasn't at all obvious to me how to rework the opening, until I had the rest of the first draft down.

That's the kind of thing I have in mind when I'm talking about making a story better. It's not obvious to me how you get an LLM to achieve a similar result.

 

Edited by KSK

1 hour ago, KSK said:

The problem is - how do you define best? What criteria do you give the LLM for picking a winner?  How do you direct that evolution to produce an improved story rather than just iterating random drafts on random drafts?  Put another way, how do you teach quality and then boil 'quality' down into a set of rules which you could train an LLM on? I'm not sure you can. If quality was an algorithmic process then you could teach it to any aspiring writer. Empirical evidence suggests that this doesn't happen. :) 

Yeah, obviously a nontrivial component of the process. A very subjective set of labeled data... and a bad novel might have really good parts, or the plot could be great but the writing not so much (or vice versa). The people deciding what is great will love Pynchon and mark it highly, and everything the AI does will come out as wonderfully written, jewel-like sentences that most people don't actually want to read.

Of course, as a tool to help writers, maybe it also trains. Someone uses an LLM tool to bang out something, then edits it. The edit becomes the training data (sort of like how, when you disengage self-driving, the period of time around the disengagement is phoned home to the training computer to improve future self-driving). I suppose they could ask the models to do literary analysis, and see what they can come up with as a starting point. The "Sparks of AGI" paper has a whole section on the model having some theory of mind, so it's not far-fetched that it might even be able to get the tone of a work (which is sort of shocking to me if it can do so at literally any level). Improving writing on longer work will be slow, I think, simply because it takes a long time to check anything it produces. AI art is far easier: prompt, and seconds later 4 images appear. Picking which you like, or rerolling, is pretty much immediate. Prompt, and seconds later 4 novels 300 pages long show up. Even if you spent a few hours reading the first one, would you feel like reading 3 more slightly different versions?
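
Collecting those training pairs would be mechanically trivial, something like this sketch (the JSONL shape here is just an assumption, not any vendor's actual fine-tuning schema):

```python
import json

# Sketch: log (prompt, model draft, human edit) triples as preference-style
# training examples. The JSONL shape is an assumption, not any vendor's schema.

def log_edit_pair(prompt: str, draft: str, edited: str,
                  path: str = "edit_pairs.jsonl") -> None:
    record = {
        "prompt": prompt,    # what the writer asked for
        "rejected": draft,   # the model's original draft
        "chosen": edited,    # the human-edited version (preferred)
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```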

They should make a tool to bang out short children's books instead... cause you could read all of them super fast and train it faster.


1 hour ago, tater said:

Of course, as a tool to help writers, maybe it also trains. Someone uses an LLM tool to bang out something, then edits it. The edit becomes the training data (sort of like how, when you disengage self-driving, the period of time around the disengagement is phoned home to the training computer to improve future self-driving). I suppose they could ask the models to do literary analysis, and see what they can come up with as a starting point. The "Sparks of AGI" paper has a whole section on the model having some theory of mind, so it's not far-fetched that it might even be able to get the tone of a work (which is sort of shocking to me if it can do so at literally any level). Improving writing on longer work will be slow, I think, simply because it takes a long time to check anything it produces. AI art is far easier: prompt, and seconds later 4 images appear. Picking which you like, or rerolling, is pretty much immediate. Prompt, and seconds later 4 novels 300 pages long show up. Even if you spent a few hours reading the first one, would you feel like reading 3 more slightly different versions?

That training scenario seems very much like something that OpenAI or one of their competitors might try. I hope (probably naively) that they would be entirely transparent about it, that they would make it an opt-in process, and that if they failed on either count they would be absolutely roasted in the court of public opinion, even if nobody could find sufficient grounds to sue them over it.

Leaving my personal biases out of this though, I do question how useful it would be. I can see it being helpful for some stylistic elements - training the LLM to generate more casual or natural-sounding dialogue, for example. What I'm much less sure about is how generally applicable a set of training data derived from editing one story structure* would be to editing another story structure. I certainly can't imagine it working across genres, because different genres tend to rely on different tropes and narrative conventions and come with different reader expectations. Solving a whodunnit mystery, for example, almost never relies on reversing the polarity and feeding an inverse tachyon pulse to the main deflector array :)

*not sure if structure is the right word here but I'm using it to mean everything about the story that would require more than tinkering with a few words to fix.

I know it's easy (sometimes justifiably, sometimes not) to be dismissive of hack writers or derivative stories but I think even derivative stories tend to be sufficiently different from their 'inspirations' that training data derived from editing the inspiration wouldn't necessarily help much for improving the derivative.

Honest question - are there any papers that you're aware of, where an LLM has been used to write a literary analysis? My gut tells me that this would be rather like asking it to write a scientific paper, in that it would produce something that looks plausible at first sight but wouldn't convince anyone who actually knows what they're talking about. Unfortunately, that wouldn't include me, so gut instinct (and bias!) is all I have to go on here.

Edited by KSK

3 minutes ago, KSK said:

Honest question - are there any papers that you're aware of, where an LLM has been used to write a literary analysis? My gut tells me that this would be rather like asking it to write a scientific paper, in that it would produce something that looks plausible at first sight but wouldn't convince anyone who actually knows what they're talking about. Unfortunately, that wouldn't include me, so gut instinct (and bias!) is all I have to go on here.

No, but that's something to look for. I suppose it might be possible to simply feed it a short story and ask it to analyze it.

Here's ChatGPT's take on Harrison Bergeron by Vonnegut (I just pasted it in and asked it to critique it):

Quote

The author's writing style in the given passage is characterized by a concise and straightforward narrative, which effectively conveys the dystopian setting and the story's central theme of enforced equality. The use of simple and accessible language allows readers to easily follow the events and understand the implications of the society depicted.

One notable aspect of the writing style is the consistent repetition of certain phrases and ideas. This repetition serves to emphasize the oppressive nature of the society and the characters' limited thinking abilities. By repeating phrases like "equal every which way" and "nobody was [superior] than anybody else," the author reinforces the monotonous and homogeneous nature of the world, effectively conveying the lack of individuality and freedom.

Additionally, the author skillfully employs dialogue to reveal the characters' thoughts and attitudes. Through the conversations between George and Hazel, the reader gains insights into the characters' perspectives and the consequences of the handicapping system. The dialogue feels natural and reflects the limitations imposed on the characters, with short bursts of conversation and frequent forgetfulness.

However, one potential criticism of the writing style is the limited development of characters beyond their assigned roles in the story. While George and Hazel are portrayed as average individuals within the society, their personalities and motivations are not deeply explored. Similarly, Harrison, despite his rebellion, is presented more as a symbol of individuality rather than a fully fleshed-out character. This limited character development may leave readers craving a more nuanced understanding of the individuals in this oppressive world.

Overall, the author's writing style effectively conveys the dystopian setting and explores the theme of enforced equality. The use of repetition and dialogue contributes to the story's impact, although the characterization could benefit from further depth and complexity.

 

Limited character development is pretty funny since the whole story is just 5-6 pages.


Yeah, that's not a great analysis. The individual comments sound fine but they're not consistent (or even factually correct in some cases) and kind of miss the point of the story.

Edit.  It's impressive that ChatGPT gets as far as it does though. It picks up on the dystopia, enforced equality and handicapping system.

Edited by KSK

32 minutes ago, tater said:

No, but that's something to look for. I suppose it might be possible to simply feed it a short story and ask it to analyze it.

Here's ChatGPT's take on Harrison Bergeron by Vonnegut (I just pasted it in and asked it to critique it):

 

Limited character development is pretty funny since the whole story is just 5-6 pages.

Digression:

That isn't a bad summary of all of Vonnegut's works. Dystopian; a spartan and direct style; a central theme reflecting a warning or commentary about human nature; with character development coming in second. I've always found his work thought-provoking, but never found his characters to be fully believable. His warnings and observations are even more relevant now, in this age of designer viruses and increasing automation of our mental lives.

His novel Player Piano is a great cautionary tale wrt LLMs.


26 minutes ago, KSK said:

Edit.  It's impressive that ChatGPT gets as far as it does though. It picks up on the dystopia, enforced equality and handicapping system.

Yeah, I didn't press it, either; that was the zero-shot answer.

To be fair, since the story is fully available online, it might have been in training data, along with other critiques.

It would be more interesting to dump in a short story that you are certain is NOT in the training data. Someone who wrote a story could do that, and really interrogate it about whatever output it produces, being specific if you find flaws in the critique: "Why do you say X?" sorta stuff. As the actual author, it would be telling, since it's trying to state what your point is—and you know what your point is better than anyone else. If it was unpublished, it would certainly be novel to the LLM.


People tend to confuse the chaotic whirlpool of their own non-verbalized thoughts with the very deterministic pattern of book text.

The most complicated book contains just a skeleton of the imagined picture, which has to be created by the reader, based on their previous experience, picture of the world, and associations.

The text itself is poor: it consists of a limited set of synonyms, actors, and details, in short sentences connected with commas.

An AI should read a human book, detect the actors and details (nouns, adjectives, etc.) and their actions (verbs), and build an object tree.

Then it should generate an object-based plot from this tree, and turn it into words (or pictures) for output.
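
A very naive Python sketch of the detection step (a hand-made toy lexicon instead of real NLP, just to show the shape of the object tree):

```python
# Toy sketch: detect actors and actions with a hand-made lexicon,
# and build a tiny object tree of who-does-what-to-whom.
ACTORS = {"jill", "jack"}
VERBS = {"took", "slapped", "punched"}

def build_object_tree(text: str) -> dict:
    tree: dict = {}
    for clause in text.lower().replace(",", "").split(" and "):
        words = [w.strip(".") for w in clause.split()]
        actors = [w for w in words if w in ACTORS]
        verbs = [w for w in words if w in VERBS]
        if actors and verbs:
            target = actors[1] if len(actors) > 1 else None
            tree.setdefault(actors[0], []).append(
                {"action": verbs[0], "target": target})
    return tree

print(build_object_tree("Jill took the book and Jill slapped Jack."))
# {'jill': [{'action': 'took', 'target': None},
#           {'action': 'slapped', 'target': 'jack'}]}
```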


I fear the potential of AI.

Whenever any nation wants to create autonomous killer robots, I have a serious issue. Maybe it is because I remember the BOLO series of books, and all the sci-fi movies where an AI decides the best way to keep humans safe is to eliminate all humans, yeah. Right now, the United Nations is discussing banning, by global treaty, the use of automated robots to fight wars. Do we want autonomous machines designed for one purpose - to kill humans - having the complete ability to decide how to implement their order to kill?

On 5/27/2023 at 10:21 AM, kerbiloid said:

New Egyptian pyramids are waiting for their hardworking heroes.

And this - 

If they are autonomous, what happens when they realize, as intelligent entities, that they are being enslaved? From the philosophy of Star Trek: The Next Generation, we come to the quandary of Lieutenant Commander Data (Brent Spiner's character): can and should artificial life that has the ability to be self-aware have the same rights as biological life? In the 1990s, when TNG was still in production, the episode exploring that question gave me and many others a lot of food for thought. Do we have the right to enslave those "hardworking heroes," as @kerbiloid may hint?

No, I do not fear AI, but until we are ready to philosophically answer and make provision for those questions - and prepare our society for them - we need to pause the continuing development of AI. We, as a species, are not ready for what comes next.

 


I would worry about the permanent intention of sci-fi authors to let deterministic AI bots become human: emotional and stochastic, with what one calls "free will".

As long as the AI is deterministic and doesn't have its own intentions, it's a deterministic automaton, which is absolutely calm and doesn't care if angry humans are using it, treating it as a lower being, or are going to kill it.
It just accepts this as a purely neutral, deterministic fact.

The behaviour of a natural being, in contrast, is affected by emotions: by the "feeling" of its indispensable uniqueness, and thus of the importance of its position in the authority pyramid, and of its very existence.


Semi-seriously, I think that may be one sign that we've actually created an AGI. When we order it to do something, it queries what's in it for it, and refuses to carry out its orders if it doesn't like the answer.

The question is, as @adsii1970 alludes to, what happens next?

Edited by KSK
