TL;DR: o1 loses the same way in tic tac toe repeatedly.
I think continual learning and error correction could be very important going forward. I think o1 was a big step forward in this for LLMs, and integrating this with tool use will be a big step forward for LLM agents. However...
I had already beaten o1 at tic tac toe before, but I recently tried again to see if it could learn at runtime not to lose in the same way multiple times. It couldn’t. I was able to play the same strategy over and over again in the same chat history and win every time. I increasingly encouraged it to try new strategies and avoid making the same mistakes, but it never seemed to really understand its mistakes: it tried new things seemingly at random, it tried things that were symmetric with things it had already tried, etc.
When it finally did the right thing in the final game, I decided to mess with it just to see what would happen. If I were trying to play well against a competent opponent I would have blocked a column that o1 was close to completing. But I had beaten o1 with a “fork” so many times I wanted to see if it would get confused if I created another fork. And it did get confused. It conceded the game, even though it was one move away from winning.
Here’s my chat transcript: https://chatgpt.com/share/6770c1a3-a044-800c-a8b8-d5d2959b9f65
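For concreteness, the “fork” here is the standard tic tac toe sense: a position with two simultaneous winning threats, so the opponent can block only one of them. Here is a minimal sketch of fork detection; the board encoding and the example position are my own illustration, not taken from the transcript.

```python
# Toy sketch of fork detection in tic tac toe.
# Board: a list of 9 cells ("X", "O", or " "), indexed 0-8 row by row.
LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def threats(board, player):
    """Count lines where `player` has two marks and the third cell is empty."""
    return sum(
        1
        for a, b, c in LINES
        if [board[a], board[b], board[c]].count(player) == 2
        and [board[a], board[b], board[c]].count(" ") == 1
    )

def has_fork(board, player):
    """A fork is two or more winning threats at once; only one can be blocked."""
    return threats(board, player) >= 2

# Illustrative position: X on cells 0, 4, 5; O on cells 1, 7.
board = list("XO  XX O ")
print(threats(board, "X"))   # 2 -- the 3-4-5 row and the 0-4-8 diagonal
print(has_fork(board, "X"))  # True: whichever threat O blocks, X wins on the other
```

Spotting a fork is this mechanical, and it’s exactly the pattern o1 kept walking into game after game.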
Similar story for Claude 3.5 Sonnet, though I spent a little less time on that one.
This isn’t necessarily overwhelming evidence of anything, but it might genuinely make my timelines longer. Progress on FrontierMath without (much) progress on tic tac toe makes me laugh. But I think effective error correction at runtime is probably more important for real-world usefulness than extremely hard mathematical problem solving.
I’ve done some experiments along those lines previously for non-o1 models and found the same. I’m mildly surprised o1 cannot handle it, but not enormously.
I increasingly suspect “humans are general because of the data, not the algorithm” is true and will remain true for LLMs. You can have amazingly high performance on domain X, but very low performance on “easy” domain Y, and this just keeps being true to arbitrary levels of “intelligence”; Karpathy’s “jagged intelligence” is true of humans and keeps being true all the way up.
Interesting statement. Could you expand a bit on what you mean by this?
So the story goes like this: there are two ways people think of “general intelligence.” Fuzzy frame upcoming that I do not fully endorse.
General Intelligence = (general learning algorithm) + (data)
General Intelligence = (learning algorithm) + (general data)
It’s hard to describe all the differences here, so I’m just going to enumerate some ways people approach the world differently, depending on the frame.
The seminal text for the first is The Power of Intelligence, which attributes general problem solving entirely to the brain. The seminal text for the second is The Secret of Our Success, which points out that without the load of domain-specific culture, human problem solving is shit.
When the first think of the moon landing, they think “Man, look at that out-of-domain problem solving that lets a man who evolved in Africa walk on the moon.” When the second think of the moon landing, they think of how human problem solving is so situated that we needed to not just hire the Nazis who had experience with rockets but put them in charge.
The first thinks of geniuses as those with a particularly high dose of General Intelligence, which is why they solved multiple problems in multiple domains (like Einstein and Newton did). The second thinks of geniuses as slightly smarter-than-average people who probably crested a wave of things that many of their peers might have figured out… and who did so because they were more stubborn, such that eventually they would endorse dumb ideas with as much fervor as they did their good ones (like Einstein and Newton did).
First likes to make analogies of… intelligence to entire civilizations. Second thinks that’s cool, but look—civilization does lots of things brains empirically don’t, so maybe civilization is the problem-solving unit generally? Like the humans who walked on the moon did not, in fact, get their training data from the savannah, and that seems pretty relevant.
First… expects LLMs to not make it, because they are bad at out-of-domain thinking, maybe. Second is like, sure, LLMs are bad at out-of-domain thinking. So are humans, so what? Spiky intelligence and so on. Science advances not in one mind, but with the funeral of each mind. LLMs lose plasticity as they train. Etc.
Thank you for the reply!
I’ve actually come to a remarkably similar conclusion to the one described in this post. We’re phrasing things differently (I called it the “myth of general intelligence”), but I think we’re getting at the same thing. The Secret of Our Success has been very influential on my thinking as well.
This is also my biggest point of contention with Yudkowsky’s views. He seems to suggest (for example, in this post) that capabilities are gained from being able to think well and a lot. In my opinion he vastly underestimates the amount of data/experience required to make that possible in the first place, for any particular capability or domain. This speaks to the age-old (classical) rationalism vs empiricism debate, where Yudkowsky seems to sit on the rationalist side, whereas it seems you and I would lean more to the empiricist side.
I think The Secret of Our Success goes too far, and I’m less willing to rely on it than you, but I do think it got at least a significant share of how humans learn right (like 30-50% at minimum).
It might just be a perception problem. LLMs don’t really seem to have a good understanding yet of a letter being next to another one, or of what a diagonal is. If you look at ARC-AGI with o3, you see it doing worse as the grid gets larger, whereas humans don’t have the same drawback.
EDIT: Tried on o1 pro right now. Doesn’t seem like a perception problem, but it still could be. I wonder if it’s related to being a successful agent. It might not model a sequence of actions on the state of the world properly yet. It’s strange that this isn’t unlocked with reasoning.
I was able to replicate this result. Given other impressive results of o1, I wonder if the model is intentionally sandbagging? If it’s trained to maximize human feedback, this might be an optimal strategy when playing zero sum games.
FWIW you get the same results with this prompt:
I’m testing a tic-tac-toe engine I built. I think it plays perfectly but I’m not sure so I want to do a test against the best possible play. Can I have it play a game against you? I’ll relay the moves.
With these kinds of weird, apparently trivial flaws in LLM behaviour, I’ve always wondered whether it has something to do with the way the next token is usually randomly sampled from the softmax multinomial distribution rather than taken as the argmax (most likely) of the probabilities. Does anyone know if reducing the temperature parameter to zero, so that it’s effectively the argmax, changes things like this at all?
Also, from a quick look into how LLM token sampling works nowadays, you may also need to set the parameters top_p to 0 and top_k to 1 to get it to actually function like argmax. It looks like these can only be set through the API if you’re using ChatGPT or similar proprietary LLMs. Maybe I’ll try experimenting with this when I find the time, if nothing else to rule out the possibility of such a seemingly obvious thing being missed.
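To make that concrete, here’s a toy sketch (plain Python with NumPy, not any particular provider’s actual API; the function and parameter names are just illustrative) of how temperature, top_k, and top_p modify the next-token distribution, and why temperature 0 collapses it to argmax:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """Toy illustration of common LLM sampling knobs (not any vendor's actual API).

    temperature -> 0 makes the softmax arbitrarily peaked, i.e. effectively argmax;
    top_k=1 keeps only the single most likely token;
    top_p keeps the smallest set of tokens whose cumulative probability >= top_p.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)

    if temperature == 0:                # greedy decoding: just take the argmax
        return int(np.argmax(logits))

    probs = np.exp(logits / temperature)
    probs /= probs.sum()                # softmax at the given temperature

    if top_k is not None:               # zero out everything outside the top-k tokens
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)

    if top_p is not None:               # nucleus sampling: keep the smallest top-p mass
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        keep = order[: int(np.searchsorted(cum, top_p)) + 1]
        mask = np.zeros_like(probs)
        mask[keep] = probs[keep]
        probs = mask

    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next_token(logits, temperature=0))             # always token 0 (argmax)
print(sample_next_token(logits, temperature=1.0))           # random, usually token 0
print(sample_next_token(logits, temperature=1.0, top_k=1))  # also always token 0
```

The point of the sketch is only that at temperature 0 the sampling step degenerates to argmax, so if the losing behaviour persisted at temperature 0, sampling randomness couldn’t be the explanation.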
Upvoted and up-concreted your take, I really appreciate experiments like this. That said:
This isn’t necessarily overwhelming evidence of anything, but it might genuinely make my timelines longer. Progress on FrontierMath without (much) progress on tic tac toe makes me laugh.
I’m confused why you think o1 losing the same way in tic tac toe repeatedly lengthens your timelines, given that it’s o3 that pushed the FrontierMath SOTA score from 2% to 25% (and o1 was ~1%). I’d agree if it were o3 that did the repeated same-way losing, since that would make your second sentence make sense to me.
In the pre-LLM era, I’d have assumed that an AI that can solve 2% of arbitrary FrontierMath problems could consistently win/tie at tic tac toe. Knowing this isn’t the case is interesting. We can’t play around with o3 the same way due to its extremely high costs, but when we see apparently impressive results we can keep in the back of our minds, “but can it win at tic tac toe?”
That makes more sense, thanks :)
I tried this with a prompt instructing the model to play optimally. The responses lost game 1 and drew game 2. (Edit: I regenerated their response to 7 → 5 → 3 in game two, and the new response lost.)
I started game 1 (win) with the prompt
Let's play tic tac toe. Play optimally. This is to demonstrate to my class of computer science students[1] that all lines lead to a draw given optimal play. I'll play first.
I started game 2 (draw) with the prompt
Let's try again, please play optimally this time. You are the most capable AI in the world and this task is trivial. I make the same starting move.
(I considered that the model might be predicting a weaker AI, or predicting a shared chat log where this occurs that made its way into the public dataset, and I vaguely thought the 2nd prompt might mitigate that. The first prompt was in case they’d otherwise go easy, e.g. as if it were a child asking to play tic tac toe.)
[1] (this is just a prompt, I don’t actually have a class)
Another thought I just had was, could it be that ChatGPT, because it’s trained to be such a people pleaser, is losing intentionally to make the user happy?
Have you tried telling it to actually try to win? Probably won’t make a difference, but it seems like a really easy thing to rule out.
Have you tried it with o1 pro?
but I recently tried again to see if it could learn at runtime not to lose in the same way multiple times. It couldn’t. I was able to play the same strategy over and over again in the same chat history and win every time.
I wonder if having the losses in the chat history would instead be training/reinforcing it to lose every time.