Overview of all AI, not just LLMs



55 minutes ago, KSK said:

I’m not sure that table is so helpful. Take calculator software, for example: I’m willing to bet that it could be used to perform tasks up to at least Level 3 on that scale. Likewise, I reckon that image generators could be used to spit out work at Levels 1-3, possibly 4, depending on what you prompt them with.

I’m also slightly alarmed that 50% of skilled adults are less competent than Siri within its range of tasks.

That table is from the DeepMind paper named at the top of the image, Levels of AGI (Nov 2023). Siri appears there as an example of "Competent Narrow AI", which the paper defines as performing at least as well as the 50th percentile of skilled adults on the task, hence the implication that alarmed you.

https://arxiv.org/html/2311.02462v2

Here's the table caption:

Table 1: A leveled, matrixed approach toward classifying systems on the path to AGI based on depth (performance) and breadth (generality) of capabilities. Example systems in each cell are approximations based on current descriptions in the literature or experiences interacting with deployed systems. Unambiguous classification of AI systems will require a standardized benchmark of tasks, as we discuss in the Testing for AGI section. Note that general systems that broadly perform at a level N may be able to perform a narrow subset of tasks at higher levels. The "Competent AGI" level, which has not been achieved by any public systems at the time of writing, best corresponds to many prior conceptions of AGI, and may precipitate rapid social change once achieved.

They also have a set of levels for autonomy vs required human interaction.

Regarding Siri... I have a feeling that nearly all of us on this forum live in a bubble. Think about the people you most commonly interact with in the real world. What percentile of cognitive ability do you think they're at? I'd wager people interested in a game about spaceflight and orbital mechanics are probably smarter than average. I hear about people selected more or less at random from the population from my wife (well, not quite random, they have to be in a situation where they need a surgeon, but that's very little trauma or anything else that would select for being a dope). She sees several thousand people a year (~20/day?). I'll say something at dinner about some X that I think people should ideally do, and she'll give me that "Do you live under a rock?" look (often verbalized in exactly those words, lol), then tell me people are too dumb to pick X. As lousy as Siri is, I'd not be surprised at this point if it was better than 50% of people. Look at the percentage of kids within different grade levels who perform at grade level:

[Chart: distribution of reading/literacy levels (National Center for Education Statistics)]

Level 2 is apparently an "8th grade" reading level, so 52.6% are not terribly literate. Honestly, "reading levels" is a pretty odd concept; I always considered reading a binary skill: first you can't read, then you can. I think these levels are functionally sorting by cognitive ability, not "reading level," since they describe reading and understanding more and more complex ideas. Reading itself doesn't change in difficulty; words are words, and if you don't recognize one, you look it up (at any "reading level"). So yeah, it would not surprise me if Siri is better than ≥50% of people at whatever narrow task they were looking at (grammar?).

Edited by tater

I'm going to have to read that DeepMind paper because I'm genuinely curious to know how they're defining AGI. Whether or not I agree with that definition is another matter, but I think it would be useful to know what it is for discussions like this.


I can easily see smart-speaker systems besting humans at a range of tasks.

Setting up alarms and calendar reminders would be a fairly low bar, but remember the tropes about people not being able to set the clock on their VCR.

Other tasks, such as adding an item to a specific list in a proprietary system (such as whatever app Google is currently using to store my shopping list), could stymie even relatively bright people if it has a poor UI.

Adding appointments to other people's calendars could be particularly difficult depending on your security setup.

My wife has a master's and regularly uses our smart speakers as a calculator. (If I am in the room, I often supply the answer first, though.)


3 hours ago, StrandedonEarth said:

Ah, but what percentage would have to look up a given word, vs. how many can infer the meaning (or make a pretty good guess) from the context?

Doesn't matter to me; looking up words is how one learns new words. I never "got" reading levels when the kids were little. Mine learned to read quickly, at home, with no real work on our part at all. Story time and picture/alphabet books were enough, and they were reading proper novels quickly. I started reading Harry Potter to my son at bedtime (my daughter, two years older, had already read maybe all of them). He kept falling asleep (he could already read; this was preschool/kindergarten), so I read the same X pages every night. At some point he just started reading it himself while awake, such that by Halloween (by then in kindergarten) he had to be Harry as his costume.

Edited by tater

1 hour ago, Terwin said:

I can easily see smart-speaker systems besting humans at a range of tasks.

Setting up alarms and calendar reminders would be a fairly low bar, but remember the tropes about people not being able to set the clock on their VCR.

Other tasks, such as adding an item to a specific list in a proprietary system (such as whatever app Google is currently using to store my shopping list), could stymie even relatively bright people if it has a poor UI.

Adding appointments to other people's calendars could be particularly difficult depending on your security setup.

My wife has a master's and regularly uses our smart speakers as a calculator. (If I am in the room, I often supply the answer first, though.)

The problem with setting the clock on a VCR or similar devices tends to be a horrible user interface, as you say. I hate two-button configuration.


Posted (edited)

I predicted that AI would lead to attempts to avoid human accountability by blaming the AI, and so it begins. Thankfully the court was amused, but not convinced, in this case.

 

Edited by darthgently

Posted (edited)

There is only one sane course. Truth and trust need to be valuable enough to us that it becomes a severe taboo to present fiction as real or the real as fiction. It cannot simply be brushed off. It is serious.

For example, a news program that violated this taboo would not just be frowned upon, but ignored from that point onward, possibly facing lawsuits where standing and merits exist. Abusers would perhaps be seen as malicious hackers of others' brains, not merely as misguided entertainers.

Fiction is fantastic (pun somewhat intended) and a core part of civilization, but it needs to be explicitly presented as fiction, not truth. 

 

Edited by darthgently

17 hours ago, darthgently said:

For example, a news program that violated this taboo would not just be frowned upon, but ignored from that point onward, possibly facing lawsuits where standing and merits exist. Abusers would perhaps be seen as malicious hackers of others' brains, not merely as misguided entertainers.

Okay, but what happens if AI-generated content is planted somewhere inconspicuously and repeatedly put in front of gullible journalists? Then I plant AI-generated emails and other communications on people's computers to make it look like the news organization deliberately portrayed AI content as real, in order to frame them.

Or what if those events actually happened (the newsmen really did deliberately portray AI content as real), but I used AI to generate evidence making my news organization look innocent when it is in fact not?

Or what if I told everyone I was interested in the truth and was following standards to protect against AI, but in reality I was secretly churning out AI-generated articles with a sinister motive? And when people noticed and exposed what was happening, I, who am in power, just wrote off their evidence as AI-generated?

I think truth is already done for. I think that was inevitable: judging by discourse I've seen on other websites, the difference between what people think happened is like night and day.

We can't protect truth if we barely even agreed on it before AI-generated content became a thing.


1 hour ago, SunlitZelkova said:

Okay, but what happens if AI-generated content is planted somewhere inconspicuously and repeatedly put in front of gullible journalists? Then I plant AI-generated emails and other communications on people's computers to make it look like the news organization deliberately portrayed AI content as real, in order to frame them.

Or what if those events actually happened (the newsmen really did deliberately portray AI content as real), but I used AI to generate evidence making my news organization look innocent when it is in fact not?

Or what if I told everyone I was interested in the truth and was following standards to protect against AI, but in reality I was secretly churning out AI-generated articles with a sinister motive? And when people noticed and exposed what was happening, I, who am in power, just wrote off their evidence as AI-generated?

I think truth is already done for. I think that was inevitable: judging by discourse I've seen on other websites, the difference between what people think happened is like night and day.

We can't protect truth if we barely even agreed on it before AI-generated content became a thing.

Or it happens by accident; it's not uncommon for joke articles to be taken seriously, like Chinese media running stories from The Onion without knowing it's a parody site.
With video it's much harder again, as we think of video as real, even though clips from games have been passed off as real combat footage.

But this is not really new; staging stuff for the camera is old. What's changing is that now everybody can do it.

Edited by magnemoe

2 hours ago, SunlitZelkova said:

Okay, but what happens if AI-generated content is planted somewhere inconspicuously and repeatedly put in front of gullible journalists? Then I plant AI-generated emails and other communications on people's computers to make it look like the news organization deliberately portrayed AI content as real, in order to frame them.

Or what if those events actually happened (the newsmen really did deliberately portray AI content as real), but I used AI to generate evidence making my news organization look innocent when it is in fact not?

Or what if I told everyone I was interested in the truth and was following standards to protect against AI, but in reality I was secretly churning out AI-generated articles with a sinister motive? And when people noticed and exposed what was happening, I, who am in power, just wrote off their evidence as AI-generated?

I think truth is already done for. I think that was inevitable: judging by discourse I've seen on other websites, the difference between what people think happened is like night and day.

We can't protect truth if we barely even agreed on it before AI-generated content became a thing.

Whatever chaos lies ahead, it will be those people and cultures who cherish reality-based truth and authentically earn each other's trust by being truthful who will come out the other side the least damaged, if at all. This may be the great filter that other sentient species out there failed to pass. I don't think there is a technical solution, though many will try, only to achieve Kafkaesque results. There is really only a character-based solution.


5 hours ago, darthgently said:

I don't think there is a technical solution, though many will try, only to achieve Kafkaesque results. There is really only a character-based solution.

Sort of like the Spanish Inquisition?

If you disagree with our morals, we 'put you to the question'?

How do you have a character-based solution for self-published phone recordings of a newsworthy event, when the citizen journalist has no history, just luck?

Also, how do you identify which of the conflicting accounts should be considered truthful or criminal when they primarily differ in definition and framing?

How about the same source video cut and framed by two different providers, giving widely different understandings of the situation?

We know that governments will eagerly abuse such authority, and any other organization will likely start out corrupted and favor one set of views over others.

Multiple independent organizations give us the current splintered view of 'truth' based on your source.

I see lots of issues, but not a lot of solutions, especially considering our current status and lack of agreement on things as simple and basic as gender.


Posted (edited)
3 hours ago, Terwin said:

Sort of like the Spanish Inquisition?

If you disagree with our morals, we 'put you to the question'?

How do you have a character-based solution for self-published phone recordings of a newsworthy event, when the citizen journalist has no history, just luck?

Also, how do you identify which of the conflicting accounts should be considered truthful or criminal when they primarily differ in definition and framing?

How about the same source video cut and framed by two different providers, giving widely different understandings of the situation?

We know that governments will eagerly abuse such authority, and any other organization will likely start out corrupted and favor one set of views over others.

Multiple independent organizations give us the current splintered view of 'truth' based on your source.

I see lots of issues, but not a lot of solutions, especially considering our current status and lack of agreement on things as simple and basic as gender.

Pretty much what I encapsulated in the word "chaos".

No inquisition. Just individuals building trust with each other the old-fashioned way, I guess. After a while you have some sources more trusted than others.

Edited by darthgently

On 3/7/2024 at 6:21 PM, Terwin said:

Sort of like the Spanish Inquisition?

If you disagree with our morals, we 'put you to the question'?

How do you have a character-based solution for self-published phone recordings of a newsworthy event, when the citizen journalist has no history, just luck?

Also, how do you identify which of the conflicting accounts should be considered truthful or criminal when they primarily differ in definition and framing?

How about the same source video cut and framed by two different providers, giving widely different understandings of the situation?

We know that governments will eagerly abuse such authority, and any other organization will likely start out corrupted and favor one set of views over others.

Multiple independent organizations give us the current splintered view of 'truth' based on your source.

I see lots of issues, but not a lot of solutions, especially considering our current status and lack of agreement on things as simple and basic as gender.

This. Note that video could be faked long ago if you had the resources, but it was expensive. And staging real events is old too, like "spontaneous" protests where everybody somehow has mass-produced signs after stepping off chartered buses.

And you get lower-level weird stuff, like people rescuing animals they themselves put in peril for clicks; those videos tend to go viral before anyone points it out.
Yes, you can genuinely get weird accidents, like a bird falling down a chimney and ending up in your wood stove.


Google Gemini AI has lots of issues; one hilarious one is that it will not show C++ to underage kids because it's "not safe".
For non-programmers: C++ does not automatically release allocated memory (or resources like database connections) used by a function when it exits, unlike more modern, higher-level programming languages.
That is the "unsafe" part :)
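
For the curious, here is a minimal sketch of that difference (the function names are made up for illustration):

#include <memory>

// Leaks: nothing releases this buffer when the function returns.
// C++ will not clean it up for you.
void leaky() {
    int* buffer = new int[1024];
    buffer[0] = 42;
}   // the pointer is gone, the allocation is leaked

// Idiomatic fix: RAII. The unique_ptr destructor frees the memory
// automatically on every exit path, even if an exception is thrown,
// which is the closest C++ gets to what garbage-collected languages
// do for you.
void safe() {
    auto buffer = std::make_unique<int[]>(1024);
    buffer[0] = 42;
}   // buffer freed here

int main() {
    leaky();
    safe();
}

The same RAII pattern is how C++ code typically manages database connections and file handles as well: tie the resource's lifetime to a scoped object and cleanup happens automatically.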

 

https://www.youtube.com/watch?v=r2npdV6tX1g

 

