For two hours yesterday, I watched the Twitch channel ClaudePlaysPokemon, which shows a Claude 3.7 Sonnet agent playing through the game Pokémon Red. Below I list some limitations I observed with the Claude agent. I think many of these limitations will carry over to agents built on other current frontier models, and to other agentic tasks.
Claude has poor visual understanding. For example, Claude would often think it was next to a character when it was clearly far away, and it would often misidentify objects in its field of vision.
The Claude agent is extremely slow. It tends to need to think for hundreds of tokens just to move one or two spaces or press A a single time. For example, it can sometimes take over a minute for the model to walk into a Pokémon Center from the entrance and heal its Pokémon. Speed will become less of a problem as inference gets faster, but I suspect it will still be necessary for future agents to distill common actions so they don't require explicit thinking, and to let the model reason over higher-level actions instead of each individual button press.
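To make the higher-level-actions point concrete, here's a minimal sketch (in Python, with made-up macro names) of what distilling common actions might look like: the model picks a named skill once instead of thinking before every button press.

```python
# Hypothetical sketch: let the agent reason over named macro-actions instead of
# individual button presses. The macro names and sequences are illustrative,
# not taken from the actual ClaudePlaysPokemon harness.

MACROS = {
    # a distilled "skill" expands into a fixed button sequence
    "heal_at_counter": ["UP", "UP", "A", "A", "A", "B"],
    "exit_building":   ["DOWN", "DOWN", "DOWN"],
}

def expand_action(action: str) -> list[str]:
    """Map a high-level action chosen by the model to raw button presses."""
    if action in MACROS:
        return MACROS[action]   # no extra thinking tokens needed per press
    return [action]             # fall back to a single button press

if __name__ == "__main__":
    plan = ["exit_building", "LEFT", "heal_at_counter"]
    presses = [b for step in plan for b in expand_action(step)]
    print(presses)
```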
The 200K token context window is a significant bottleneck. This was surprising to me, since I had thought a context window that can store a medium-length book should be able to hold a large enough chunk of a game in memory for this not to be a significant problem. But when the model is outputting ~200 tokens per action, where an action is often a single button press, and when it regularly processes screenshots of the game that require an unknown but likely large number of tokens, the context window can fill up quite fast. At one point I measured that it took ~7.5 minutes to fill up the context window, which, due to the model's slowness, was only enough to leave a Pokémon Center, fight a few Pokémon, and then go back to the Center.
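The arithmetic behind that bottleneck is simple. In the sketch below, the 200K window, the ~200 output tokens per action, and the ~7.5 minutes to fill are the figures from above; the per-screenshot token cost and everything derived from it are assumptions, since I don't know the real numbers.

```python
# Rough arithmetic on how quickly the 200K window fills.
CONTEXT_WINDOW    = 200_000  # tokens (from the post)
TOKENS_PER_ACTION = 200      # observed reasoning/output tokens per button press
MINUTES_TO_FILL   = 7.5      # observed time for the window to fill

tokens_per_minute = CONTEXT_WINDOW / MINUTES_TO_FILL
print(f"implied burn rate: ~{tokens_per_minute:,.0f} tokens/min")

# If screenshots cost, say, 1,000 tokens each -- an assumption, since the true
# vision-token cost is unknown -- each "look at screen, think, press a button"
# cycle costs roughly:
assumed_screenshot_tokens = 1_000
cycle_cost = TOKENS_PER_ACTION + assumed_screenshot_tokens
print(f"~{CONTEXT_WINDOW // cycle_cost} action cycles per context window")
```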
Even though Claude makes a lot of mistakes within a context window, its biggest mistakes occur when it needs to accomplish tasks that span multiple context windows. Because the context window is too small, the model often forgets things it did very recently and gets stuck in loops. This makes it very bad at tasks that require systematic exploration. In one particularly infuriating sequence, Claude entered a house, talked to its residents, left the house, explored to the left and came back, and did this over and over for tens of minutes because it kept forgetting what it had already done.
The agent has tools to help it avoid this, like adding things to an external knowledge base and summarizing what it did to paste into the next context window. But it doesn't know how to use these very well, perhaps because the model is just using them in ways that seem reasonable, rather than in ways that have empirically led to good performance in the past.
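For illustration, here's roughly what that memory pattern looks like as code. This is my own sketch of the general "knowledge base plus rolling summary" idea, not the actual tooling the ClaudePlaysPokemon harness uses.

```python
# Minimal sketch of cross-context memory: an external knowledge base for
# durable facts, plus a rolling summary carried into the next context window.
class AgentMemory:
    def __init__(self):
        self.knowledge_base: dict[str, str] = {}  # facts that outlive a context window
        self.summary: str = ""                    # pasted at the top of the next context

    def remember(self, key: str, fact: str) -> None:
        """Store a durable fact, e.g. 'viridian_house_2' -> 'visited, nothing useful'."""
        self.knowledge_base[key] = fact

    def summarize_context(self, recent_events: list[str]) -> None:
        """Compress the expiring context into a short summary for the next one."""
        self.summary = "Recently: " + "; ".join(recent_events[-5:])

    def next_context_prefix(self) -> str:
        """What the agent sees at the start of a fresh context window."""
        facts = "\n".join(f"- {k}: {v}" for k, v in self.knowledge_base.items())
        return f"{self.summary}\nKnown facts:\n{facts}"
```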
Claude can't learn new behaviors on the fly. One useful skill humans have is that we can pick up new strategies and high-level actions quickly, after only a small amount of experience. But Claude needs to re-learn what to do essentially every time, so virtually every task, even ones it has done hundreds of times before, has some non-negligible chance of failure. While in principle the external knowledge base could help with this, it doesn't appear to in practice.
Claude often has bad in-game decision-making and lacks creativity. While the model often thinks of ideas that seem roughly reasonable given what it knows, it often misses simple opportunities to do things faster or more easily. It also tends to stick with its initial idea (as long as it remembers it) even when something slightly different would work better. For example, at one point Claude decided it wanted to level up its lowest-leveled Pokémon, so every time that Pokémon fainted, it took the long walk back to the Pokémon Center to heal, even though it would've made sense to spend some time leveling up its other low-level Pokémon before making the return trip. Claude sometimes has very good ideas, but because it can't learn new behaviors on the fly, the good ideas never get reinforced.
All this being said, ClaudePlaysPokemon still impresses me, and is probably the most impressive LLM agent demonstration I’ve seen. Through reasoning and persistence, Claude is able to progress fairly far in the game, accomplish tasks requiring thousands of steps, and eventually get out of loops even when it’s been stuck for a long time. I expect increased agentic RL training, increased cross-context RL training, and test-time learning to iron out a lot of these limitations over the next year or two.
The 200K token context window is a significant bottleneck.
Gemini Pro has a 2 million token context window, so I assume it would do significantly better. (I wonder why no other model has come close to the Gemini context window size. I have to assume not all algorithmic breakthroughs are replicated a few months later by other models.)
Does it really work on RULER (the long-context benchmark from NVIDIA)?
Not sure where, but I saw some controversy; https://arxiv.org/html/2410.18745v1#S1 is the best I could find right now...
Edit: Ah, this was what I had in mind: https://www.reddit.com/r/LocalLLaMA/comments/1io3hn2/nolima_longcontext_evaluation_beyond_literal/
I assume that for Pokémon the model doesn't need to remember everything exactly, so recall quality may matter less than raw context length.