
Overview of all AI, not just LLMs



Of course, the elephant in the room is the massive differential in energy requirements between silicon and wetware brains, in a world woefully lacking the electrical infrastructure to cover the viral plan to electrify all transportation, HVAC, tools, and appliances. On top of that, the projected electrical budget for future silicon compute is alarmingly humongous.

This could be a great filter for terrestrial life if we attempt all of this at all costs, by government fiat and/or corporate flexing (or a dangerous partnership between the two), without first having achieved post-scarcity by accessing resources beyond Earth, and without first having durably planted (modified?) terrestrial life elsewhere beyond Earth.


19 minutes ago, Nuke said:

this guy has one of the best logos on youtube. 

It is trendy-looking, I'll grant that.  If you are interested in the details of the compute industry with a focus on hardware (and "cats on an island"), techtechpotato is a great follow, regardless of the logo.  The "cookie" in the logo is, I'd guess, a shiny, prismatic silicon wafer.  Hopefully not a CD-ROM, lol.


For context, with regard to the projected costs of silicon compute, the human brain runs on around 20 watts.  The wattage of hypothetical human-equivalent silicon compute is likely to be many orders of magnitude beyond that.  We should be focusing on real education of real people if we want to solve problems.  Silicon compute is an expensive sidecar to the civilizational/societal process: useful, but very costly, and to be judiciously applied in specific use cases.  We can't afford to replace people en masse with silicon, if only from an energy standpoint; we don't have, nor can we afford, the infrastructure for it.
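As a rough back-of-envelope illustration of that gap (the cluster size, per-GPU draw, and overhead factor below are assumptions for the sake of the sketch, not measurements):

```python
# Back-of-envelope comparison: human brain vs. a hypothetical GPU cluster.
# All silicon-side numbers are assumptions, for illustration only.
import math

brain_watts = 20.0        # rough figure for the human brain (from the post above)

gpus = 10_000             # assumed cluster size for a large training run
watts_per_gpu = 700.0     # assumed draw of one modern datacenter GPU
overhead = 1.5            # assumed factor for cooling, networking, power conversion

cluster_watts = gpus * watts_per_gpu * overhead
print(f"cluster: {cluster_watts / 1e6:.1f} MW")
print(f"orders of magnitude vs. brain: {math.log10(cluster_watts / brain_watts):.1f}")
# ~10.5 MW vs. 20 W: roughly five to six orders of magnitude under these assumptions.
```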


To be fair, natural intelligence does require a support system that takes more than just the 20 watts. For the average USAian it comes to just under 10 kW.
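As a sanity check on that figure, here is a minimal back-of-envelope sketch using round numbers for total US primary energy consumption and population (both approximate and year-dependent):

```python
# Rough check of average US per-capita power use (primary energy basis).
# The inputs are approximate public figures, not exact statistics.
QUAD_IN_JOULES = 1.055e18        # 1 quad (quadrillion BTU) in joules
us_primary_energy_quads = 100    # ~100 quads of primary energy per year (approx.)
us_population = 333e6            # ~333 million people (approx.)
seconds_per_year = 365.25 * 24 * 3600

total_joules_per_year = us_primary_energy_quads * QUAD_IN_JOULES
avg_watts_per_person = total_joules_per_year / seconds_per_year / us_population
print(f"{avg_watts_per_person / 1000:.1f} kW per person")   # on the order of 10 kW
```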


57 minutes ago, tomf said:

To be fair, natural intelligence does require a support system that takes more than just the 20 watts. For the average USAian it comes to just under 10 kW.

Yes, but that support is for far more than merely compute.  It is what is required to exist as an evolved intelligence on the only life-supporting world we know of in a vast universe.  Existence matters!

Edited by darthgently

  • 1 month later...
On 1/7/2024 at 5:49 PM, darthgently said:

For context, with regard to the projected costs of silicon compute, the human brain runs on around 20 watts.

It's a human network adaptor LED.

The real data storage is in the cloud.

On 1/7/2024 at 8:46 PM, tomf said:

To be fair, natural intelligence does require a support system that takes more than just the 20 watts. For the average USAian it comes to just under 10 kW.

A modded case with bells and whistles for that adaptor.

Edited by kerbiloid

https://openai.com/research/video-generation-models-as-world-simulators

A few quotes:

Quote

Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world.

 

Importantly, Sora is a diffusion transformer

 

We find that video models exhibit a number of interesting emergent capabilities when trained at scale. These capabilities enable Sora to simulate some aspects of people, animals and environments from the physical world. These properties emerge without any explicit inductive biases for 3D, objects, etc.—they are purely phenomena of scale.

 

We believe the capabilities Sora has today demonstrate that continued scaling of video models is a promising path towards the development of capable simulators of the physical and digital world, and the objects, animals and people that live within them.

That last bit is important not for making videos, but for AGI. We all have an internalized real world model, which informs "common sense." This might be that for computers.

Edited by tater

2 hours ago, tater said:

https://openai.com/research/video-generation-models-as-world-simulators

A few quotes:

That last bit is important not for making videos, but for AGI. We all have an internalized real world model, which informs "common sense." This might be that for computers.

Would an AI for, say, self-driving, trained only on data from vehicle POV cameras, inertial sensors, and the like, guess that the wider world was flat (ignoring hills and such) rather than an oblate spheroid?

Or would its gyros be sensitive enough to detect the curvature over a longer drive?  If so, would it discount the curvature as unimportant and pragmatically embrace flat-earthism?
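For a sense of scale, here is a minimal sketch; the drive length and the gyro drift figure are assumptions, and real IMU grades vary enormously:

```python
# How much does "down" rotate over a long drive on a spherical Earth,
# and is that within reach of a cheap in-vehicle gyro?
import math

EARTH_RADIUS_KM = 6371.0
drive_km = 100.0                 # assumed drive length
drive_hours = 1.0                # assumed duration (~100 km/h)

tilt_rad = drive_km / EARTH_RADIUS_KM     # local vertical rotates by d / R
print(f"tilt over {drive_km:.0f} km: {math.degrees(tilt_rad):.2f} deg")   # ~0.9 deg

# Assumed drift of an automotive-grade MEMS gyro (order of magnitude only).
mems_drift_deg_per_hour = 5.0
print(f"typical MEMS drift over the same drive: ~{mems_drift_deg_per_hour * drive_hours:.0f} deg")
# Under these assumptions the curvature signal (~0.9 deg/h) sits at or below cheap-gyro
# drift, so a camera-plus-MEMS system could plausibly never "notice" the Earth is round.
```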

I'm wondering to what degree human flat-earthers may just be going with the flow of their wetware neural nets in a natural way, and then backfilling with bad symbolic "logic" to support the more fundamental neural-net conclusion.  This may seem obvious, but I think it is important to recognize that some silly conclusions can make practical sense in a limited domain.  The danger of flat-earthism, or its analogues in other domains, exists for AI too.  And we don't want AI to be like that.  Do we?

Edited by darthgently

3 hours ago, darthgently said:

Would an AI for, say, self-driving, trained only on data from vehicle POV cameras, inertial sensors, and the like, guess that the wider world was flat (ignoring hills and such) rather than an oblate spheroid?

And would it conclude that red lights mean stop, green lights mean go, and yellow lights mean go really fast?


IMO, these LLMs that people have gotten all excited about seem to be excellent Wernicke's aphasia simulators. They are really good at sounding fluent but conveying no real meaning.

https://garymarcus.substack.com/p/statistics-versus-understanding-the

In that article, for instance, you see lots of pictures that AI generated when asked for an image of a person writing with their left hand. They all show right-handed people -- because the AI doesn't actually know what left and right are, and the images it has been trained on overwhelmingly show people writing with their right hands.


7 minutes ago, mikegarrison said:

IMO, these LLMs that people have gotten all excited about seem to be excellent Wernicke's aphasia simulators. They are really good at sounding fluent but conveying no real meaning.

https://garymarcus.substack.com/p/statistics-versus-understanding-the

In that article, for instance, you see lots of pictures that AI generated when asked for an image of a person writing with their left hand. They all show right-handed people -- because the AI doesn't actually know what left and right are, and the images it has been trained on overwhelmingly show people writing with their right hands.

Agreed.  It is mostly just automated knee-jerk.  I read something the other day about efforts to layer a more old-school symbolic-logic AI approach on top of the neural approach, which seemed promising.  If I can dig up the link I'll post it.


1 hour ago, mikegarrison said:

IMO, these LLMs that people have gotten all excited about seem to be excellent Wernicke's aphasia simulators. They are really good at sounding fluent but conveying no real meaning.

https://garymarcus.substack.com/p/statistics-versus-understanding-the

In that article, for instance, you see lots of pictures that AI generated when asked for an image of a person writing with their left hand. They all show right-handed people -- because the AI doesn't actually know what left and right are, and the images it has been trained on overwhelmingly show people writing with their right hands.

Hard to believe a Feb 2024 article used GPT-3, not GPT-4. The latter is far more capable. Also, the limited tokens really matter. With more tokens you can take the model—pretrained—and teach it. It gives a wrong answer, and you don't just correct it, but show it HOW to correct it—like a kid asking math homework questions. You ask the kid to show their work, then maybe you ask if they checked the signs—then they notice they added instead of subtracted when moving a variable from one side of the expression to the other. You don't tell them, you lead them. Models do this now with multi-shot questioning, and do far better.
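A minimal sketch of that "lead, don't tell" pattern as a chat transcript; ask() below is a hypothetical stand-in for whatever chat-completion client you use, and the conversation content just illustrates the multi-turn structure:

```python
# Hypothetical multi-turn "show your work" exchange with a chat model.
# ask() is a placeholder, not a specific library's API; a real version would
# forward the running message list to a chat-completion endpoint.
from typing import Dict, List

def ask(messages: List[Dict[str, str]]) -> str:
    """Placeholder: send the conversation so far, return the model's reply."""
    return "(model reply would appear here)"

messages = [
    {"role": "system", "content": "You are a careful math tutor."},
    {"role": "user", "content": "Solve for x: 3x + 5 = 2x - 7. Show each step."},
]
messages.append({"role": "assistant", "content": ask(messages)})  # may be wrong at first

# Don't hand over the answer; lead the model to re-check its own work,
# the way you would lead a student.
messages.append({"role": "user",
                 "content": "Check the sign when you move 2x to the left-hand side, "
                            "then redo that step and state the corrected result."})
messages.append({"role": "assistant", "content": ask(messages)})
```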

Yes, it is finding the next word, just sounding fluent—but they are nonetheless also more than that. They have demonstrated emergent capabilities. The theory-of-mind examples in that long paper (Microsoft people? I linked it in another thread) are telling. That was not "finding the next word" via statistics; it correctly explains what and WHY the different characters in the scenario think what it says they think.

At a certain level faking intelligence IS intelligence. As has happened throughout the quest for AI, the goalposts will shift—often rightfully. Back in the day chess was said to require intelligence. Humans fell, the bar was moved. Go is much harder than chess, surely it requires intelligence... nah, not enough. Chatbots could beat a Turing test right now (depending on the interlocutor), nah, not enough, bar moved.

Maybe coming up with novel math or scientific ideas will be enough? I have no idea, but I think it's far closer than it had been.


1 hour ago, tater said:

Hard to believe a Feb 2024 article used GPT-3, not GPT-4. The latter is far more capable. Also, the limited tokens really matter. With more tokens you can take the model—pretrained—and teach it. It gives a wrong answer, and you don't just correct it, but show it HOW to correct it—like a kid asking math homework questions. You ask the kid to show their work, then maybe you ask if they checked the signs—then they notice they added instead of subtracted when moving a variable from one side of the expression to the other. You don't tell them, you lead them. Models do this now with multi-shot questioning, and do far better.

Yes, it is finding the next word, just sounding fluent—but they are nonetheless also more than that. They have demonstrated emergent capabilities. The theory-of-mind examples in that long paper (Microsoft people? I linked it in another thread) are telling. That was not "finding the next word" via statistics; it correctly explains what and WHY the different characters in the scenario think what it says they think.

At a certain level faking intelligence IS intelligence. As has happened throughout the quest for AI, the goalposts will shift—often rightfully. Back in the day chess was said to require intelligence. Humans fell, the bar was moved. Go is much harder than chess, surely it requires intelligence... nah, not enough. Chatbots could beat a Turing test right now (depending on the interlocutor), nah, not enough, bar moved.

Maybe coming up with novel math or scientific ideas will be enough? I have no idea, but I think it's far closer than it had been.

The point is that the whole method is fundamentally flawed. Not only is faking intelligence not intelligence, but it's not even close to intelligence. It's like if you studied an entire dictionary and learned exactly how every word is related to every other word, but still had no clue that any of them actually refer to a real world outside the dictionary.

They don't exhibit "emergent capabilities". We *see* emergent capabilities in them, just like we see patterns in tea leaves and shapes in clouds and the face of Jesus on a tortilla. We interpret their babbling as meaningful, but the "emergent capabilities" being demonstrated are all on our side of the fence.

Edited by mikegarrison

3 hours ago, tater said:

Maybe coming up with novel math or scientific ideas will be enough? I have no idea, but I think it's far closer than it had been.

They've got a way to go yet, though. Forget the title of the article - IMO that particular picture is sort of OK in an exaggerated-for-clarity way.  The rest of the diagrams, though - and their captions - are just typical LLM mashups: superficially convincing in that they look sort of like the real thing, but actually meaningless.


1 hour ago, mikegarrison said:

The point is that the whole method is fundamentally flawed. Not only is faking intelligence not intelligence, but it's not even close to intelligence. It's like if you studied an entire dictionary and learned exactly how every word is related to every other word, but still had no clue that any of them actually refer to a real world outside the dictionary.

They don't exhibit "emergent capabilities". We *see* emergent capabilities in them, just like we see patterns in tea leaves and shapes in clouds and the face of Jesus on a tortilla. We interpret their babbling as meaningful, but the "emergent capabilities" being demonstrated are all on our side of the fence.

I disagree. If you can string words together, you can string words together. Intelligence != consciousness. That's the point of a Turing test. The machine can know or "understand" nothing, but if you can't tell it from a human, blinded, what difference does it make? If you can ask it questions, and it can answer as well as any human—how is it different from the standpoint of intelligence?

The theory-of-mind questions were in fact emergent, and unexpected. When fed scenarios using garbage words as the subject (so the words existed in no training data), GPT-4 accurately described the scenario about the one guy losing his FISBIT (or whatever the word was), as well as the possible internal motivations of the humans in the scenario and what they might be thinking to themselves. The model above (video from text) and the related paper say:

Quote

We find that video models exhibit a number of interesting emergent capabilities when trained at scale. These capabilities enable Sora to simulate some aspects of people, animals and environments from the physical world. These properties emerge without any explicit inductive biases for 3D, objects, etc.—they are purely phenomena of scale.

This is not dissimilar from the "nothing but nets" Tesla FSD 12, trained entirely on driving data from other Teslas. It was never told what a stop sign is, yet it stops at them. It learned to recognize and slow for speed bumps. These are also emergent behaviors, as they are not innate properties of the system (though perhaps less profound than describing the state of mind of a human).

A lot of this depends on the definition of intelligence, honestly. If that includes "consciousness" (ill-defined even for humans) then yeah, that's going to be hard to get to, or even demonstrate. If it is "the ability to acquire and apply knowledge and skills," then I think "faking it" is not meaningfully different from being it. How does a pathologist read a biopsy slide? He's trained on images of different pathologies, looks for patterns that match what he recognizes as abnormal, and categorizes them. AI systems can already do this—and when they tested one, they found that the AI system had discovered a new tumor marker pattern (the finding was subsequently verified; the data included examples where cancer was known to be present but was not thought to be visible).
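For concreteness, that "train on labeled examples, then match patterns" workflow looks roughly like the sketch below. This is a generic transfer-learning setup: the folder path, class labels, and hyperparameters are placeholders, and it is not the pipeline from the study being described.

```python
# Generic sketch: fine-tune a pretrained image classifier on labeled biopsy patches.
# Paths, classes, and hyperparameters are placeholders for illustration only.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a layout like slides/train/<class_name>/*.png (e.g. "benign", "malignant").
train_data = datasets.ImageFolder("slides/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images; replace the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The interesting part of the anecdote above is that a classifier trained this way can end up keying on visual patterns its human trainers had not explicitly catalogued.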


4 minutes ago, tater said:

A lot of this depends on the definition of intelligence, honestly. If that includes "consciousness" (ill-defined even for humans) then yeah, that's going to be hard to get to, or even demonstrate. If it is "the ability to acquire and apply knowledge and skills," then I think "faking it" is not meaningfully different from being it. How does a pathologist read a biopsy slide? He's trained on images of different pathologies, looks for patterns that match what he recognizes as abnormal, and categorizes them. AI systems can already do this—and when they tested one, they found that the AI system had discovered a new tumor marker pattern (the finding was subsequently verified; the data included examples where cancer was known to be present but was not thought to be visible).

Wouldn't it be more correct to say that the AI had discovered a new correlation between a set of images? I doubt it had any concept of what a tumour marker was, what a biopsy slide was, or the implications of discovering one on the other.


8 hours ago, KSK said:

They've got a way to go yet, though. Forget the title of the article - IMO that particular picture is sort of OK in an exaggerated-for-clarity way.  The rest of the diagrams, though - and their captions - are just typical LLM mashups: superficially convincing in that they look sort of like the real thing, but actually meaningless.

I don't disagree, but the rate of improvement is nonetheless striking. I think that building "common sense" in the form of a world-model is probably required for systems to have meaningful general intelligence in the future. We have that from existing in the world and seeing real-world cause and effect since we were toddlers. Push the spoon and it falls off the high-chair tray, then the grownups pick it up... fun! Do it again! Hence my thinking that embodiment might be required. That said, the ability to build internally consistent videos with common-sense physics is a sort of world-model, so it's a start.

8 hours ago, KSK said:

Wouldn't it be more correct to say that the AI had discovered a new correlation between a set of images? I doubt it had any concept of what a tumour marker was, what a biopsy slide was, or the implications of discovering one on the other.

Fair enough. If a model in a year comes up with something novel analyzing some dataset, and can describe it... then what? We say it's just an LLM writing plausible English to describe some data it observed, that it doesn't "know" anything? Maybe some people will never recognize it as more than a statistical model.

Edited by tater

23 minutes ago, tater said:

Fair enough. If a model in a year comes up with something novel analyzing some dataset, and can describe it... then what? We say it's just an LLM writing plausible English to describe some data it observed, that it doesn't "know" anything? Maybe some people will never recognize it as more than a statistical model.

Probably. As you said, the goalposts are always moving.

If somebody came up with a model that moved beyond mere description and basically wrote an internally consistent first draft paper without being led through the process by its metaphorical hand - then I think at that point I'd agree that the question of whether the model was actually intelligent, or just faking it, was irrelevant.

By "first draft paper" I mean something that sets the background for the new dataset in terms of what's been observed before, describes the new analysis, and then presents some sort of meaningful conclusion from that analysis. Is it consistent with previous results? Does it invalidate previous results? Does it have any wider repercussions? Does it suggest any new avenues of research?

It's probably unrealistic to expect anything quite as pithy or insightful as Crick & Watson's "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material" but that's the kind of thing I'm getting at.

Edited by KSK
