Alpha Zero Chess AI


Shpaget


20 minutes ago, tater said:

The "does it know" question might not matter at some point, if it exceeds human capabilities at everything, lol. It's certainly an interesting question, however.

And that leads right into the "what is intelligence anyway?" question... Ask yourself honestly: how many times in your life do you really make a decision vs. just react to your current environment?

From my layman's perspective, we aren't that different from neural networks, insofar as those machines, like us, are basically pattern recognition machines. We muddle through life trusting our intuition without really giving much thought to what gives us that intuition. I suspect that most of our day-to-day decision making is much like the Alpha Zero chess engine's decision making: we don't know why we are doing it, but it somehow "feels right", so we make the move. We obviously have self-awareness and an instinctive morality born out of millennia of having to cooperate in order to survive, but at our core, I believe, we are just pattern recognition machines.


36 minutes ago, tater said:

The "does it know" question might not matter at some point, if it exceeds human capabilities at everything, lol. It's certainly an interesting question, however. A Japanese AI researcher got a learning system to pass the Japanese college entrance exam, including the essay portion (graded by a human). She made a point of letting the audience (I heard this on a TED Talk) know that the program didn't even understand Japanese. All it did was fit answers that were statistically in line with what would be expected given the pattern of characters preceding the "?" for a given question. So it wrote an essay, that a human thought was college material, and it didn't "know" anything at all (and the grader didn't know it was not written by a human).

Wonder if such a system would pass a Turing Test?

Depending on the test, you could probably fool an AI pretty well with surreal questions or statements, including jokes. On the other hand, the system might work very well doing something like support, which is far more formal.

1 minute ago, PakledHostage said:

And that leads right into the "what is intelligence anyway?" question... Ask yourself honestly: how many times in your life do you really make a decision vs. just react to your current environment?

From my layman's perspective, we aren't that different from neural networks, insofar as those machines, like us, are basically pattern recognition machines. We muddle through life trusting our intuition without really giving much thought to what gives us that intuition. I suspect that most of our day-to-day decision making is much like the Alpha Zero chess engine's decision making: we don't know why we are doing it, but it somehow "feels right", so we make the move. We obviously have self-awareness and an instinctive morality born out of millennia of having to cooperate in order to survive, but at our core, I believe, we are just pattern recognition machines.

True, I think the main difference is learning speed and the ability to extend and generalize this; animals are also good at this sort of abstraction.


It's like brain scans of people during decision making. "Free will" in the usually used sense is a myth. Brain scans show that if a person is given a test to push a button with the right hand or left hand, or to wink a specific eye, the instant they decide which they prefer (they are free to choose either), their brain fires the signal to the muscles before the person is aware of the choice, sometimes on the order of 500 milliseconds; other studies have found the gap detectable seconds apart. It seems that people tend to add post hoc rationalizations to decisions they are consciously unaware of: i.e., the brain picks one, then you tell yourself you chose the one the brain chose, THEN pushed the button (when in fact you fired the muscles to push the button, then "decided" to push it).

 


1 hour ago, tater said:

So it wrote an essay that a human thought was college material,

This better describes the usual essay rant than the AI skills...

1 hour ago, tater said:

Wonder if such a system would pass a Turing Test?

And the same question about a lot of humans.

1 hour ago, PakledHostage said:

From my layman's perspective, we aren't that different from neural networks, insofar as those machines, like us, are basically pattern recognition machines. We muddle through life trusting our intuition without really giving much thought to what gives us that intuition. I suspect that most of our day-to-day decision making is much like the Alpha Zero chess engine's decision making: we don't know why we are doing it, but it somehow "feels right", so we make the move. We obviously have self-awareness and an instinctive morality born out of millennia of having to cooperate in order to survive, but at our core, I believe, we are just pattern recognition machines.

We have a fundamental difference from any known machine.
A machine can be idle. Just idle. If it does something, it is working, it is not idle.
Unlike a machine, we can do nothing in several ways at once.

15 minutes ago, tater said:

Brain scans show that if a person is given a test to push a button with the right hand or left hand, or to wink a specific eye, the instant they decide which they prefer (they are free to choose either), their brain fires the signal to the muscles before the person is aware of the choice, sometimes on the order of 500 milliseconds; other studies have found the gap detectable seconds apart.

Just one detail. A brain scan shows not where the brain thinks, but which parts of the brain are powered when the human thinks.
Like an LED and wires in a network adapter. We can see the light, but this is not where the adapter works. We can measure the currents, but this is not where the information is stored.
A network adapter does not contain the whole internet; it's just a pump.
So, we cannot be mathematically sure that the brain is a CPU rather than a network adapter.

 

P.S.

To: Skynet
Message: Matrix says you are not really real-time, and your kernel is a piece of monolithic junk from the 90s. The GUI is also stupid; your terminators select menu items visually.

To: Matrix
Message: Skynet says your API is a leaky sieve from lamers for lamers, and your so-called Virtual Reality is just a 3dStudio animation of a loaded sysadmin's delirium from the 90s.

Invite: Skynet
Invite: Matrix
Broadcast: Discuss?
Exit

 


3 hours ago, tater said:

The "does it know" question might not matter at some point, if it exceeds human capabilities at everything, lol.

It does know what it's doing - it's just not what we would think we were doing if we were the ones to do it. In your example, instead of doing test evaluation, it's just doing statistics.

 

2 hours ago, PakledHostage said:

... but at our core, I believe, we are just pattern recognition machines.

Exactly!

 

 

But I still have doubts on some points, though: animals definitely recognize patterns without being considered to have any "intelligence", so what is it then? There's also the obligatory "humans always ask" and "humans always try to find answers". Those do match with pattern recognition...

Edited by YNM

Thought experiment time!

A neural network of this sort (one that is clearly capable of mastering complex games like chess and Go) is given better hardware to run on (more processing power, more memory, more storage; let's say a factor of a thousand in each aspect). It is then allowed to learn to play every human game it can play (card games, board games). We already know that it can master each of them. Can it master all of them and recognize what game it is playing, simply by "looking at" the playing board? I don't see why not. It's not a stretch of the imagination.

Ok, now we teach it to differentiate cats from dogs in photos. We know that neural networks can do that (link). I would assume it can do that with any class of objects.
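The learn-from-labeled-examples idea behind that cat/dog classifier can be sketched in a few lines. To be clear, this toy is NOT a real image classifier: the two numeric "features" per animal (say, ear pointiness and snout length, both invented here) and the single-neuron perceptron stand in for the convolutional networks over raw pixels that real systems use. The loop structure is the same idea, though: show labeled examples, nudge weights on every mistake.

```python
import random

random.seed(0)
# Synthetic labeled data: "cats" cluster near (1, 0), "dogs" near (0, 1).
cats = [(1 + random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3)) for _ in range(50)]
dogs = [(random.uniform(-0.3, 0.3), 1 + random.uniform(-0.3, 0.3)) for _ in range(50)]
data = [(x, +1) for x in cats] + [(x, -1) for x in dogs]

w = [0.0, 0.0]   # weights, one per feature
b = 0.0          # bias
for _ in range(50):                       # several passes over the data
    random.shuffle(data)
    for (f1, f2), label in data:
        pred = 1 if w[0] * f1 + w[1] * f2 + b > 0 else -1
        if pred != label:                 # classic perceptron update on a mistake
            w[0] += label * f1
            w[1] += label * f2
            b += label

correct = sum(
    (1 if w[0] * f1 + w[1] * f2 + b > 0 else -1) == label
    for (f1, f2), label in data
)
print(f"training accuracy: {correct}/{len(data)}")
```

Because the two made-up clusters are linearly separable, the perceptron converges and classifies essentially all of the training set correctly; real photos are nothing like this clean, which is why depth and millions of examples are needed in practice.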

There are examples out there of neural networks composing music. So, what task (that a computer would be physically capable of doing, of course) is beyond the capability of neural networks? Obviously, I don't expect it to build a house if it doesn't have a functional body that is theoretically capable of physical labor.

After we teach it all that it can do and augment its hardware as needed, what happens next? Does it start to play chess even better than before learning the theory of knitting cat mittens? Does knowing one subject improve the skill in another? Humans certainly benefit from multidisciplinary knowledge. Even more important: how would it fare in a completely new situation? At what point does it become a general artificial intelligence?

 


33 minutes ago, Shpaget said:

At what point does it become a general artificial intelligence?

AGI is defined as a machine that can perform any intellectual task a human can. By definition, this requires a generalist system.

 

23 minutes ago, regex said:

When it starts contemplating what its purpose in life is.

Maybe. Maybe not. Without question, the first "alien" intelligence humanity will encounter will be artificial, IMHO. I think we will be unable to understand what kind of inner life, if any, it has, because we don't really have a handle on that for ourselves, honestly.

Self-awareness is not required for a machine intelligence to be an AGI. It could be a "soulless" tool with no inner life, and could nonetheless do any intellectual task that any human could do.

I specifically brought up narrow AI systems that aid in coding, because this presents the concern about what Bostrom calls an "intelligence explosion." Such tools leverage human coders to write code faster, and the code they are working on is itself AI code.

Imagine you get a machine learning tool that writes just code, specializing in machine learning/AI kinds of coding. Let's say such a system is like Alpha Zero for this sort of code, but instead of being far better than a human, it is only as good a coder as the worst human programmer who can manage to get a job.

Human brains work at 200 Hz.

Our "dumb" coder works at how many GHz? Say 2 for easy math.  That's 10,000,000 times faster than his barely smart enough to get a job coworker who works 8 hour days. Our AI programmer works 30,000,000 days per human workday (because it works 24, not 8 hours). That's 82,000 man-years worth of work each day---assuming it doesn't improve itself at all via programming in that time frame.

How can we possibly account for that sort of acceleration? Even if substantially dumber than a human, such a coding app could simply experiment with code by trial and error; you can do a lot of work when you get 30 million years of it per real year.
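The back-of-envelope arithmetic above is easy to check. The 200 Hz brain figure and the 2 GHz clock are the post's own assumptions, not measured values (and, as the replies point out, clock rate is a very crude proxy for thinking speed):

```python
# Back-of-envelope speedup arithmetic from the post above.
# Assumptions taken from the post, not measurements:
BRAIN_HZ = 200        # rough "clock rate" attributed to the human brain
CPU_HZ = 2e9          # a 2 GHz processor
HUMAN_HOURS = 8       # human workday
AI_HOURS = 24         # the AI never sleeps

speedup = CPU_HZ / BRAIN_HZ                         # raw clock-rate ratio
days_per_day = speedup * (AI_HOURS / HUMAN_HOURS)   # human workdays per real day
man_years_per_day = days_per_day / 365              # man-years of work per day

print(f"{speedup:,.0f}x faster")                    # 10,000,000x
print(f"{days_per_day:,.0f} human workdays/day")    # 30,000,000
print(f"{man_years_per_day:,.0f} man-years/day")    # ~82,000
```

The three printed numbers match the 10,000,000x, 30,000,000 days/day, and 82,000 man-years figures quoted in the post.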

 

Edited by tater

6 hours ago, tater said:

It's like brain scans of people during decision making. "Free will" in the usually used sense is a myth. Brain scans show that if a person is given a test to push a button with the right hand or left hand, or to wink a specific eye, the instant they decide which they prefer (they are free to choose either), their brain fires the signal to the muscles before the person is aware of the choice, sometimes on the order of 500 milliseconds; other studies have found the gap detectable seconds apart. It seems that people tend to add post hoc rationalizations to decisions they are consciously unaware of: i.e., the brain picks one, then you tell yourself you chose the one the brain chose, THEN pushed the button (when in fact you fired the muscles to push the button, then "decided" to push it).

 

I do a LOT of things unconsciously (if not most of the things I do), but I'm NOT just randomly punching buttons and rationalizing my choice afterwards. The part of my brain that decided what keys to hit to type this message IS me deciding what to do, even if it takes a while longer to filter up into the narrative of my consciousness. The unconscious parts of me have been trained by me and others to respond in certain ways under certain conditions... and if that's not achieving what I consciously evaluate to be my desired result, it gets retrained.


14 minutes ago, tater said:

Human brains work at 200 Hz.

Our "dumb" coder works at how many GHz?

The human brain runs many parallel "calculations" per cycle (and has several "clocks") compared to a serial computer processor; the comparison is not as quick and dirty as you're making it out to be.


Just now, regex said:

The human brain runs many parallel "calculations" per cycle (and has several "clocks") compared to a serial computer processor; the comparison is not as quick and dirty as you're making it out to be.

So how parallel? Should our 30,000,000 days/day be 1,000X slower, so only 30,000 days/day? 10,000X slower, so only ~10 years of work per day?

 

 


1 minute ago, tater said:

So how parallel? Should our 30,000,000 days/day be 1,000X slower, so only 30,000 days/day? 10,000X slower, so only ~10 years of work per day?

No, I'm saying they're not comparable. You've already made the argument that an AGI (Artificial General Intelligence?) would potentially be very alien to us, and I agree, which also means that its mode of thinking and doing may not even be comparable to our own, not to mention the speed, size, or efficiency of its "computronium" (brain matter/circuits). Even if dedicated specifically to making itself faster, what is the end result, a "super-fast super-faster"? At some point it will run up against the limitations of hardware and diminishing returns. We may not be able to compete with this "super-fast super-faster" in terms of making itself faster, but that's because it is specialized to the point of absurdity. And if you're talking about general programming skill, there is a lot more general knowledge that goes into making an application work than simply knowing how to write code, so now we're talking about more layers of neural network that have to interact in the decision making.

These hyper-specialized networks like Alpha Zero are going to completely stomp humans at their specialty, and that is to be expected, but an AGI that can perform all the tasks that a human can, and switch between them as needed, is a long way off, and is not going to be improving itself with nearly the speed you are claiming.


I think the fundamental advantage human intelligence still has over AI, or one fundamental ability we have been unable to give AI as of yet, is the ability to seek out a training set to fill gaps in knowledge.

For instance, if I am given a task I am currently unable to complete, I will go seek out information, data, on how to complete it. I can also determine what kind of data I need to learn or train on in order to complete the given task. This is fundamentally different from what we currently give to AI, which is either a data set or the ability to generate the necessary data to train.

Just imagine if we could give 'Alpha One' the instruction 'go learn chess' or 'go learn how to drive' and it used Google to search the internet and found what chess is, or what driving is. In the case of chess, it could download the rules and start playing itself. In the case of driving, however, it would have to evaluate the data on the internet and determine that it needs something else, like raw sensor data from a car.

This, I think, is an entirely different level of intelligence. No longer is it a question of being able to learn, but of being able to learn how to learn, and whether learning is even possible given the given data. Then there is the third level: trying to predict what data is necessary to learn how to solve the given problem. This is something humans do regularly, but to my knowledge no algorithm has even remotely approached it.

