Agreed, although that in turn makes me wonder why it does perform a bit better than random. Maybe there is some nondeclarative knowledge about the image, or some blurred position information? I might next test how much vision is the bottleneck here by providing a text representation of the grid, as in Ryan Greenblatt’s work on ARC-AGI.
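For concreteness, something like the sketch below is what I have in mind for the text version (the helper name and format are my own choices, not taken from the ARC-AGI code):

```python
# Hypothetical sketch: serialise the grid into plain text for the prompt
# instead of sending an image. Function name and format are my own choices.
def grid_to_text(grid: list[list[int]]) -> str:
    """One row per line, cells separated by spaces."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

example = [
    [0, 0, 3, 0],
    [0, 3, 3, 0],
    [0, 0, 0, 0],
]
print(grid_to_text(example))
# 0 0 3 0
# 0 3 3 0
# 0 0 0 0
```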
If we consider each include/exclude decision for (1, 2, 3, 4, 5) as a separate question, the error rate is around 20%. Much better than random guessing. So why does it make mistakes at all?
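To be explicit about the scoring I mean (the example answer sets below are made up, just to illustrate the counting):

```python
# Treat each of the five labels as an independent include/exclude decision
# and count disagreements with ground truth. Example sets are invented.
def per_decision_error_rate(predicted: set[int], actual: set[int],
                            labels=(1, 2, 3, 4, 5)) -> float:
    wrong = sum((label in predicted) != (label in actual) for label in labels)
    return wrong / len(labels)

print(per_decision_error_rate({1, 3, 4}, {1, 2, 4}))  # 0.4: two of five decisions wrong
```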
If bottlenecking on data is the problem, more data in the image should kill performance. So how about a grid of 3-digit numbers (random values in the range 100 to 999)?
Claude 3.5 Sonnet does perfectly: a perfect score answering lookup(row, col) and find_row_col(number), finding duplicates, and transcribing the grid to CSV.
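Roughly how I set that up (a sketch; the ground-truth answers are computed in code so the model’s responses can be checked, and the exact grid size and image rendering are incidental):

```python
# Sketch: generate a grid of random 3-digit numbers and compute ground-truth
# answers for the lookup / find / duplicates / CSV tasks. Rendering the grid
# to an image for the model is done separately and not shown here.
import csv
import io
import random
from collections import Counter

random.seed(0)
ROWS, COLS = 10, 10
grid = [[random.randint(100, 999) for _ in range(COLS)] for _ in range(ROWS)]

def lookup(row: int, col: int) -> int:
    return grid[row][col]

def find_row_col(number: int) -> list[tuple[int, int]]:
    return [(r, c) for r in range(ROWS) for c in range(COLS) if grid[r][c] == number]

def duplicates() -> list[int]:
    counts = Counter(v for row in grid for v in row)
    return [v for v, n in counts.items() if n > 1]

def to_csv() -> str:
    buf = io.StringIO()
    csv.writer(buf).writerows(grid)
    return buf.getvalue()
```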
So this isn’t a bottleneck like human working memory. Maybe we need to use a higher-resolution image so it has more tokens to “think” with? That doesn’t seem to work for the yellow-areas task above either, though.
I’m guessing this is a straightforward failure to generalise. Tables of numbers are well represented in the training data (possibly synthetic data too); visual geometry puzzles, not so much. The model has learned a few visual algorithms but hasn’t been forced to generalise yet.
The root cause might be some stupid performance optimisation that screws up image perception the same way BPE text encoding messes up byte-level text perception. I’m guessing sparse attention.
Text representations
Text representations are no panacea. Similar problems (e.g. “rotate this grid”) often have very formatting-dependent performance. Looking for sub-tasks that are probably more common in the training data and composing those (e.g. rotate by composing transpose and mirror operations, as sketched below) lets a model do tasks it otherwise couldn’t. Text has generalisation issues just like images do.
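The rotate-by-composition trick, concretely (a minimal sketch; the point is that transpose and mirror are each far simpler, and presumably far more common in training data, than an arbitrary rotation):

```python
# 90-degree clockwise rotation of a text grid expressed as transpose
# followed by mirroring each row, i.e. two simpler sub-operations.
def transpose(grid: list[list[str]]) -> list[list[str]]:
    return [list(col) for col in zip(*grid)]

def mirror(grid: list[list[str]]) -> list[list[str]]:
    """Flip each row left-to-right."""
    return [row[::-1] for row in grid]

def rotate_cw(grid: list[list[str]]) -> list[list[str]]:
    return mirror(transpose(grid))

print(rotate_cw([list("abc"), list("def")]))
# [['d', 'a'], ['e', 'b'], ['f', 'c']]
```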
If Claude 3.5 Sonnet has pushed the frontier in Tetris, that would be evidence for generalisation. I predict it still fails badly.
I would expect that they fare much better with a text representation. I’m not too familiar with how multimodality works exactly, but I kind of assume that “vision” works very differently from our intuitive understanding of it. When we are asked such a question, we look at the image and start scanning it with the problem in mind. Transformers, by contrast, seem to have only a rather vague “conceptual summary” of the image available, with many details, but maybe not all of them for any possible question, and then have to work with that very limited representation.

Maybe somebody more knowledgeable can comment on how accurate that is, and on whether we can expect scaling to eventually just solve this problem or whether some different mitigation will be needed.