Overview of all AI, not just LLMs


1 hour ago, NFUN said:

This thread has approximately eight hours before it becomes solely a discussion on the Chinese Room problem

 

Good luck

I think many here are aware of the Chinese Room problem, but if you have a resolution, do share, please. Generations of philosophers could finally rest in their graves peacefully. AI is here whether or not we can adequately define it (or our own intelligence). This thread is necessarily going to be forced to deal with the pragmatic aspects of AI, and the unknowables will likely remain so.

Edited by darthgently

7 hours ago, KSK said:

It's probably unrealistic to expect anything quite as pithy or insightful as Crick & Watson's "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material" but that's the kind of thing I'm getting at.

I actually agree here, and think something like this is a better Turing test than the Turing test.

I think in terms of useful, "intelligent" tools, we have a few already, and they are rapidly becoming more "intelligent" (in quotes), so it's sort of a semantic issue. I won't even try and guess when/if consciousness is a thing.


2 hours ago, NFUN said:

This thread has approximately eight hours before it becomes solely a discussion on the Chinese Room problem

You internet entities keep throwing symbols like that ^^^ into my brain and it keeps spitting out translations to "meaning" somehow. It's annoying.


I've often thought that our brains are largely just pattern recognition machines. The degree of success we have in recognizing those patterns is seen as intelligence.  Find or create a pattern that nobody has seen before?  Smart! AI is good at pattern recognition too, and it is getting better. ...But then the trouble in categorizing AI arises from the edge cases around that "largely just"... 

Edited by PakledHostage

8 hours ago, NFUN said:

This thread has approximately eight hours before it becomes solely a discussion on the Chinese Room problem

 

Good luck

Nah. Chinese room problem is silly.

No, wait, let me re-phrase that. The Chinese Room does not describe how human minds work. But it *is* a pretty decent analog for how these LLMs work. And that's why LLMs are not what people think/hope/fear that they are.

I'm not saying human minds aren't computational devices. I think they are. But the LLM is not how human minds are built, and it's not a path toward anything we would think of as being actually intelligent. IMO, of course. But it's a pretty educated opinion on this subject.

Edited by mikegarrison

4 hours ago, PakledHostage said:

I've often thought that our brains are largely just pattern recognition machines.

They are not. Pattern recognition is only one sub-system of the brain.

Human brains are mostly focused on a few things: keeping you alive (breathing, heart, etc.), controlling your body (walking, etc.), language, and vision processing. But they also have a lot of other specific and general capabilities.

Vision processing is much, much more than pattern recognition, which is why robots find it notoriously difficult. Essentially what you are doing is a real-time mapping of a 2D image into a 3D model. It's the much harder reverse of the task that 3D video games do when they take a 3D model and project it into a 2D image. The reverse mapping is basically impossible (not enough data to find a unique solution) unless you already have a bunch of built-in concepts about the 3D world, like object permanence and the idea that objects obscure what is behind them. You also have to know, for instance, that an elephant seen from the side and an elephant seen from the front are the same elephant, even though the 2D patterns look very different.
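
A toy sketch of why that inverse mapping is underdetermined (made-up numbers, and obviously nothing to do with how brains actually do it): a pinhole camera throws away depth, so very different 3D scenes can land on exactly the same 2D image, and you can only choose between them with prior knowledge about the world.

```python
# Toy pinhole projection: many different 3D points collapse onto the same
# 2D image coordinates, which is why going from 2D back to 3D needs
# built-in assumptions (object permanence, occlusion, typical sizes, ...).

def project(x, y, z, focal_length=1.0):
    """Project a 3D point (x, y, z) onto the image plane of a pinhole camera."""
    return (focal_length * x / z, focal_length * y / z)

# A small, near object and a large, far object along the same line of sight:
near_small = (1.0, 0.5, 2.0)    # metres
far_large  = (5.0, 2.5, 10.0)   # 5x the size, 5x the distance

print(project(*near_small))  # (0.5, 0.25)
print(project(*far_large))   # (0.5, 0.25) -- identical image, different worlds
```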

True language processing is also much more complicated than pattern recognition, and involves grammar like nouns, verbs, modifiers, and the like. Moreover, for it to be useful, you have to be able to use the same (or close enough) meanings as other people, as well as the same grammar.


I tend to think more advanced systems will combine multiple AI techniques (some that might not even exist yet); a few people I have read are in this camp, and I find it a reasonable take. Sora, for example, is two techniques combined (transformers and diffusion models). I have a feeling more old-school expert systems will get linked in at some point as well, since heavy LLM use has shown that "zero-shot" answers are less reliable than multi-shot questions. Sora appears to be working as a rough physics model (for the common-sense physics we experience day to day): it was given no explicit physics model or 3D model, yet it needs something like one to move cameras and objects coherently within frames, and it seems to have developed that capability emergently in order to produce acceptable video content.
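
(Very loosely, and only to illustrate what "diffusion" means here: the generator starts from pure noise and repeatedly removes a little of it, with a learned model, reportedly a transformer over spacetime patches in Sora's case, predicting the cleaner version at each step. The toy loop below is a hand-wavy sketch of that idea, nothing resembling OpenAI's actual code; denoise() is a made-up stand-in for the learned model.)

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, t):
    """Made-up stand-in for the learned denoiser (a transformer in Sora's case).
    Here it just pulls the sample toward zero, our pretend 'clean' target."""
    return 0.9 * x

steps = 50
x = rng.normal(size=(4, 4))            # start from a tiny frame of pure noise
for t in reversed(range(steps)):
    predicted = denoise(x, t)          # model's guess at a cleaner frame
    noise_level = t / steps            # remaining noise shrinks as t -> 0
    x = predicted + 0.1 * noise_level * rng.normal(size=x.shape)

print(np.abs(x).mean())  # much smaller than the ~0.8 average we started with
```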

1 hour ago, mikegarrison said:

True language processing is also much more complicated than pattern recognition, and involves grammar like nouns, verbs, modifiers, and the like. Moreover, for it to be useful, you have to be able to use the same (or close enough) meanings as other people, as well as the same grammar.

To the first part, is it? You might be correct; I actually don't have a strong intuition about it, but I also don't have a strong idea of how my own brain processes language. Word choice, consistent meanings, and grammar are emergent qualities of LLMs; they are not given heuristics for those things. In the real world that would be like Tesla FSD 12 stopping at red lights and stop signs without ever having been told to do so, or FSD 12 now slowing at speed bumps (which I guess it didn't do before).

To be clear, I'm not actually taking the opposite position; I'm in the "I don't know" phase at the moment. The pace of progress is so fast right now that we might not have to wait too long to know one way or the other. And LLMs, GANs, diffusion, etc., might absolutely hit a wall soon, in which case this whole AI rush ends up looking like the 1980s (when AI looked promising, then died out for a few decades).


6 hours ago, mikegarrison said:

True language processing is also much more complicated than pattern recognition, and involves grammar like nouns, verbs, modifiers, and the like. Moreover, for it to be useful, you have to be able to use the same (or close enough) meanings as other people, as well as the same grammar.

I'll admit to knowing very little about AI or how our own brains work, but the impression I mentioned above comes from watching my kids learn as they grew from infancy into childhood, and from my own reflections on the subject. Having said that, how does the passage above support your argument? Current AI is basically pattern recognition, yet LLMs can do all of what you describe as "true language processing". One can ask them questions and give them instructions in plain language, and they do a reasonable job of responding appropriately. Surely doing that requires contextual understanding of nouns, verbs, modifiers, and the like?

Edit: Thinking about it some more,  even your scenario of the elephant can be interpreted as an example of pattern recognition.  Your brain learns that that combination of shapes and colours is an elephant. It learns how that combination of shapes and colours changes as the elephant is viewed from different angles. It learns object permanence. These are all patterns. Putting them together allows the brain to formulate an expectation or model of the world it is experiencing. Some of these patterns that it learns, like object permanence, arise out of interacting with the physical world, but that interaction still  breeds a recognition of patterns and expectations that future situations will follow the same pattern as was previously encountered. Babies learn object permanence early in their development. They learn that things fall. They learn what elephants and petunias look like. They learn to expect that falling petunias don't think "oh no, not again!"

Edited by PakledHostage

1 hour ago, PakledHostage said:

I'll admit to knowing very little about AI or how our own brains work, but the impression I mentioned above comes from watching my kids learn as they grew from infancy into childhood, and from my own reflections on the subject. Having said that, how does the passage above support your argument? Current AI is basically pattern recognition, yet LLMs can do all of what you describe as "true language processing". One can ask them questions and give them instructions in plain language, and they do a reasonable job of responding appropriately. Surely doing that requires contextual understanding of nouns, verbs, modifiers, and the like?

Edit: Thinking about it some more,  even your scenario of the elephant can be interpreted as an example of pattern recognition.  Your brain learns that that combination of shapes and colours is an elephant. It learns how that combination of shapes and colours changes as the elephant is viewed from different angles. It learns object permanence. These are all patterns. Putting them together allows the brain to formulate an expectation or model of the world it is experiencing. Some of these patterns that it learns, like object permanence, arise out of interacting with the physical world, but that interaction still  breeds a recognition of patterns and expectations that future situations will follow the same pattern as was previously encountered. Babies learn object permanence early in their development. They learn that things fall. They learn what elephants and petunias look like. They learn to expect that falling petunias don't think "oh no, not again!"

It's 30 years old now, but a decent place to start is The Language Instinct, by Pinker.

When I was in college, my gf was working in Pinker's lab. She was doing things like searching through transcripts of kids' recordings, looking for very specific grammar errors. One thing that you find is that, unlike LLMs, kids learn language by figuring out the rules, rather than just associating the words from the usage that they hear. This can be shown by how they will make a certain kind of grammar error that they never hear adults say -- regularizing irregular constructions. A kid has likely never heard an adult say "Joe goed to the store," so it's clearly not the kind of learning that an LLM does, where it just regurgitates the things it was trained on. Instead, the kids internalize the rule that you add -ed to the verb, but don't (at first) pick up the irregular nature of the verb "to go". They aren't just repeating things they have heard other people say, because other people don't say it.

Yes, it's a "pattern" that they learn -- add -ed to the end of the verb -- but that implies that have already recognized some words are verbs and some are not, and that different tenses have different suffixes, and so forth. If they were learning like LLMs, they would always use "went" instead of "goed", because all their training data uses "went".


3 hours ago, mikegarrison said:

It's 30 years old now, but a decent place to start is The Language Instinct, by Pinker.

When I was in college, my gf was working in Pinker's lab. She was doing things like searching through transcripts of kids' recordings, looking for very specific grammar errors. One thing that you find is that, unlike LLMs, kids learn language by figuring out the rules, rather than just associating the words from the usage that they hear. This can be shown by how they will make a certain kind of grammar error that they never hear adults say -- regularizing irregular constructions. A kid has likely never heard an adult say "Joe goed to the store," so it's clearly not the kind of learning that an LLM does, where it just regurgitates the things it was trained on. Instead, the kids internalize the rule that you add -ed to the verb, but don't (at first) pick up the irregular nature of the verb "to go". They aren't just repeating things they have heard other people say, because other people don't say it.

Yes, it's a "pattern" that they learn -- add -ed to the end of the verb -- but that implies they have already recognized that some words are verbs and some are not, and that different tenses have different suffixes, and so forth. If they were learning like LLMs, they would always use "went" instead of "goed", because all their training data uses "went".

IIRC, a lot of grammar has been shown to have some genetic basis (Chomsky?), with things like subject-object-verb order being set by environment. Simplifying greatly here, but I do recall that there is a genetically provided framework (a lot of it in Wernicke's area?) for the categories of nouns and verbs.


7 hours ago, mikegarrison said:

Yes, it's a "pattern" that they learn -- add -ed to the end of the verb -- but that implies that have already recognized some words are verbs and some are not, and that different tenses have different suffixes, and so forth. If they were learning like LLMs, they would always use "went" instead of "goed", because all their training data uses "went".

Good point. That said, I suppose the question is: does the pattern recognition need to be identical to how humans do it to count as "intelligence"?

Assuming there is nonhuman intelligence somewhere in the cosmos, does it necessarily learn the same way humans do? Or is it just a different path to the same thing? Sort of like the aircraft analogy: aircraft certainly fly, just not the same way birds do.

On the "goed" to "went" issue, I wonder how quickly that happens in terms of training data? My kids obviously went through that phase as well (and I certainly hear older kids and adults at large in society who apparently can't manage conjugation), but the amount of "training data" for kids is vanishingly small compared to something like an LLM—they get stuff quite fast. Makes me wonder if really sparse training on an LLM might produce similar sorts of errors, but they skip over it because the training sets are so huge? Never really saw much about them til GPT-3 to be honest (wonder what sorts of mistakes GPT-1 made?).


You are discussing a nominative language, but that's just a decoration on top of the original active–stative alignment: https://en.wikipedia.org/wiki/Active–stative_alignment

The baby language is taught as a set of nouns. The more usual a noun is, the bigger its balloon.
Some nouns don't have a shape; they are animations (when a kid grows, they mutate into verbs).

The more nouns a kid knows, the more often it uses some of the (animated) nouns to describe relations between the solid, big nouns. Those become verbs.

There is no need to distinguish nouns, verbs, etc. There is a need to build a standard set of patterns between the nouns (some of which are verb roots).

That's how it's wenting.


@tater raises an interesting point with regard to iterations required to learn. Clearly our brains are vastly more efficient at learning  than AI. That's even true of what some would regard as "dumb" animals. My sister went on a safari in Tanzania's Ngorongoro national park and the guides there explained that the jeeps weren't allowed to stop when they saw predators because the prey animals had figured out that the predators are located where the jeeps stop. The predators were having a harder time hunting because the prey detected a pattern in the human behavior. How many iterations did it take for the herds to recognize that pattern? 

Edited by PakledHostage
Spelling

1 hour ago, PakledHostage said:

How many iterations did it take for the herds to recognize that pattern? 

Probably not many - there are scores of examples of 'prey' relationships where the larger animals are responsive to warning calls by birds or small mammals, or reactive to other herd actions. Being cognizant of the environment and what disparate signals might mean has been a key to survival forever.

I liked reading what you wrote because it shows the tour guides (and government regulators) are aware of human impacts and trying to be responsible. But the sight of humans has always meant a threat for most of these animals, whether we are in a Jeep or not... and if we show up repeatedly right before the lion attacks, that's not much different from 10,000 years of evolution informing them that where humans are, dogs are likely to attack, and bad things happen. They don't know we're just watching for entertainment; they just know that when one predator acts in a given way (the Jeep stops), it likely means there are others about to pounce.

...

My biggest complaint about the AI hype is that it's all based on predictive algorithms: trying to predict the qualitative answer sought by the user, which makes it simply a more efficient tool. Even the graphic work is just predictive; "Is this what you want?" is the output.

We're a far cry from any of these systems having a desire of their own.

Think about it: elephants are supposedly self-aware.  They have names for each other and recognize themselves in a mirror (among other things).  That intelligence is tied up with their own survival and needs/wants.  Nothing I've ever seen has suggested that these intelligent animals have any desire to learn to play music for their own entertainment, much less design a rifle and start fighting back against the poachers.

Even if our tools can now mimic what our artists can create (AI music, AI art, AI writing) - that is still just a tool spitting out a predictive response based on inputs.  Wake me up when AI starts writing its own music for itself to enjoy.


The conversational thread here has scattered a bit, so I need to ask: Are we talking about artificial human-like intelligence, or artificial intelligence more generically? Creation of art and the way we learn language are unique to us. But that doesn't mean other entities (be they elephants, computers or Pakleds) aren't or can't be intelligent just because they don't manifest their intelligence the way we do.

And please don't get me wrong... I'm not some Kurzweil fanboy... I expect that, while we've seen impressive progress in the AI field over the last 10 years, the devil will be in the details when it comes to the emergence of AGI. There are aspects of current AI that can match our own abilities, and at times I think we overestimate our own exceptionalism, but the devil is still in the details.


19 minutes ago, PakledHostage said:

The conversational thread here has scattered a bit, so I need to ask: Are we talking about artificial human-like intelligence, or artificial intelligence more generically? Creation of art and the way we learn language are unique to us. But that doesn't mean other entities (be they elephants, computers or Pakleds) aren't or can't be intelligent just because they don't manifest their intelligence the way we do.

And please don't get me wrong... I'm not some Kurzweil fanboy... I expect that, while we've seen impressive progress in the AI field over the last 10 years, the devil will be in the details when it comes to the emergence of AGI. There are aspects of current AI that can match our own abilities, and at times I think we overestimate our own exceptionalism, but the devil is still in the details.

That question is the kicker.

I think the problem is in the wording of the name and the collective fearmongering that has occurred. We call it Artificial Intelligence, then label a tool that very artfully tells us what we want to hear as "intelligent"... and mistake that for actual intelligence (a word that is itself totally muddled, especially in English).

Think back to the last decade or so of people talking about people - things like EQ vs IQ (emotional quotient vs intelligence quotient), wisdom, intuition... all the things that make up a 'mind' or a 'person'.  I frankly don't think we are anywhere close to creating a mind. 

We are just creating yet another disruptive technological tool, and we cannot yet predict how far it will go toward changing our society. But to layer on top of this concern the fantasy that AI will both become self-aware and then decide to control us? (Remember the old saw: the only real way you control a thing is by possessing the ability to destroy it.)

The real foundational risk is 1984. 

When we (the people) become reliant on the tool, and the government/corporation decides what we can and cannot know. That's not a new risk. It is THE risk.


1 hour ago, JoeSchmuckatelli said:

I think the problem is in the wording of the name and the collective fearmongering that has occurred. We call it Artificial Intelligence, then label a tool that very artfully tells us what we want to hear as "intelligent"... and mistake that for actual intelligence (a word that is itself totally muddled, especially in English).

It all gets muddled with consciousness, etc., as well. I tend to think that humanity will unambiguously agree that we've reached a milestone with machine intelligence (perhaps a better word than artificial?) when it comes up with novel ideas. That's actually a pretty high bar, because not many humans have actually come up with novel ideas in the sense I mean. Even most "intellectual" people are mixing and matching their own "training data" of education, books and papers read, real work (lab, business, whatever), etc. If GPT-X mish-mashes stuff together and presents an idea, it can be said that it was just telling us what we wanted to hear. If a person wrote the same essay about that subject, we might say they have a novel POV based on their education/life/work experience and applaud it.

At least real science claims made by AI (at some point, when/if it makes a novel claim) can be tested. They are necessarily falsifiable. Same with engineering: maybe an AI can come up with a novel engineering solution—someone can build it and see if it actually works. Art—written, graphic, musical, or cinematic—can/could be made by "AI" at some point and the only judge of success will be people thinking it's good... worth reading, listening to, or watching. A lower bar, considering what people like from media made by humans ;)

Edited by tater

4 hours ago, PakledHostage said:

@tater raises an interesting point with regard to iterations required to learn. Clearly our brains are vastly more efficient at learning  than AI. That's even true of what some would regard as "dumb" animals. My sister went on a safari in Tanzania's Ngorongoro national park and the guides there explained that the jeeps weren't allowed to stop when they saw predators because the prey animals had figured out that the predators are located where the jeeps stop. The predators were having a harder time hunting because the prey detected a pattern in the human behavior. How many iterations did it take for the herds to recognize that pattern? 

A few times. When I was a kid, we had a dog who always ran away. I found I could trap him with food and close the door. It worked with the dog's kennel once, and with the garage twice.
Pretty sure animals know that humans are not very dangerous outside of hunting season, and not dangerous at all in suburban settings where hunting is illegal.

 


14 hours ago, magnemoe said:

Pretty sure animals know that humans are not very dangerous outside of hunting season, and not dangerous at all in suburban settings where hunting is illegal.

If you are a herbivore, a lack of fear tends to be dangerous. But running away from harmless things is also dangerous (unnecessary caloric output, plus the chance that you alert or even blunder into something actually dangerous). At some point many animals learn to be cautiously trusting around other animals that don't seem hostile.


20 hours ago, tater said:

I tend to think that humanity will unambiguously agree that we've reached a milestone with machine intelligence (perhaps a better word than artificial?) when it comes up with novel ideas.

Right, this is pretty much the sticking point when it comes to all forms of machine intelligence. Even in scientific fields, with purpose-built deep learning models for particular data sets, it seems pretty tricky to cross that hurdle into finding new correlations that the average (or even expert) human wouldn't be able to find. These models are great at reproducing some of the more nebulous correlations that humans can pick out, but catching the golden goose seems to require something different...

20 hours ago, tater said:

Art—written, graphic, musical, or cinematic—can/could be made by "AI" at some point and the only judge of success will be people thinking it's good...

...which is currently the issue with AI-generated art. Theoretically it's only able to work within the parameter space given by its training set, and in practice it seems to only be able to work within a narrower 'average' portion of that parameter space where there's sufficient data to work with. I assume that's why it tends to produce either very 'safe' results that are arguably lacking that cohesive spark, or just weird nonsense that may as well be random, no matter what medium it's working in.

On 2/18/2024 at 4:15 PM, tater said:

Makes me wonder if really sparse training on an LLM might produce similar sorts of errors, but they skip over it because the training sets are so huge?

I actually had a friend who trained an older LLM (GPT-2?) on some old chat messages for fun, before things really blew up... and the results were a bit mixed. Whenever it was stuck with a small sample, it would either just regurgitate a message from the training data verbatim (or close enough) or spit out gibberish words that were only sometimes pronounceable. I was told at the time that this was probably due to a combination of the model generating character by character rather than word by word, and the fact that some of the words in the training data actually were made-up nonsense, because that's how people speak online sometimes.
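
You can reproduce that small-sample behaviour with something even cruder than GPT-2, e.g. a character-level Markov chain over a tiny made-up chat corpus: with so little data it either replays the training text nearly verbatim or wanders into barely pronounceable gibberish (toy sketch below, invented corpus and all).

```python
import random
from collections import defaultdict

corpus = "lol that build is so cursed ngl fr fr no cap"   # tiny invented "chat log"
order = 3                                                  # characters of context

# Record which character follows each 3-character context in the corpus.
table = defaultdict(list)
for i in range(len(corpus) - order):
    table[corpus[i:i + order]].append(corpus[i + order])

random.seed(1)
state = corpus[:order]
generated = state
for _ in range(60):
    continuations = table.get(state)
    if not continuations:      # tiny corpora run out of continuations quickly
        break
    generated += random.choice(continuations)
    state = generated[-order:]

print(generated)  # mostly regurgitated fragments of the corpus, or near-gibberish
```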

Edited by GluttonyReaper

No comment. Or too many to post? I'm not sure how comfortable I am with how comfortable some folks seem to be with this approach to warfare decision-making. The term "kneejerk Armageddon" pops into my head.

 

Edited by darthgently

I’m not sure that table is so helpful. Take calculator software, for example: I’m willing to bet it could be used to perform tasks up to at least Level 3 on that scale. Likewise, I reckon image generators could be used to spit out work at Levels 1-3, possibly 4, depending on what you prompt them with.

I’m also slightly alarmed that 50% of skilled adults are less competent than Siri within its range of tasks.

