Hardly anyone seems to have access to the image-enabled GPT-4 yet, and thus far GPT-4 results on things like text art of Tic-tac-toe or Sudoku have not done well, so it doesn’t look like you can just ASCII-art your way out of problems like this yet.
I was a little surprised the ASCII-art approach didn’t work, because my original guess for the multimodal GPT-4 training had been that it was iGPT/DALL-E-1 style: just serializing images into visual tokens to train autoregressively on as a prefix. However, the GPT-4 architecture leak claims that it’s implemented in a completely different way: as a separate model which does cross-attention to GPT-4. So that helps deal with the problem of the visual tokens losing detail while using up a lot of context, but is also not something that necessarily would instill a lot of visual knowledge into the text-only model. (The text-only model might even be completely frozen and untrained, like Flamingo etc.)
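For concreteness, here is a rough sketch (my own illustration in PyTorch, not the leaked GPT-4 design; all shapes and module choices are placeholders) of the two approaches being contrasted: images serialized into ordinary prefix tokens versus a frozen text model that reads image features through cross-attention, Flamingo-style.

```python
import torch
import torch.nn as nn

d = 512
decoder_layer = nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True)
for p in decoder_layer.parameters():
    p.requires_grad = False                       # text side kept frozen, Flamingo-style

image_tokens = torch.randn(1, 256, d)             # image serialized into visual tokens
text_tokens = torch.randn(1, 128, d)

# (a) iGPT/DALL-E-1 style: visual tokens become an ordinary autoregressive prefix,
#     so they consume context and any detail lost in tokenization is gone for good
prefix_input = torch.cat([image_tokens, text_tokens], dim=1)

# (b) cross-attention style: the text stream attends to image features held
#     outside its own context window
fused = decoder_layer(tgt=text_tokens, memory=image_tokens)
print(prefix_input.shape, fused.shape)            # [1, 384, 512] vs [1, 128, 512]
```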
Anyway, as I’ve been suggesting since 2019 or so, it’d probably be useful to train some small chess Transformers directly on both PGN and FEN (and mixtures thereof, like a FEN every n PGN moves) to help elucidate Transformer world-modeling. Any GPT-n result is as much a result about the proprietary internal OA datasets (and effects of BPEs) as it is about Transformer world-modeling or capabilities.
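As a concrete version of that data format, here is a minimal sketch (assuming the python-chess library; the dump filename and n=10 are placeholders I chose, not anything from the comment) that serializes each game as SAN moves with the current FEN interleaved every n plies:

```python
import chess.pgn

def pgn_with_fen_every_n(game: chess.pgn.Game, n: int = 10) -> str:
    """Serialize one game as SAN moves, inserting the current FEN after every n plies."""
    board = game.board()
    tokens = []
    for ply, move in enumerate(game.mainline_moves(), start=1):
        tokens.append(board.san(move))             # SAN for the move about to be played
        board.push(move)
        if ply % n == 0:
            tokens.append(f"[FEN {board.fen()}]")  # interleaved board-state checkpoint
    return " ".join(tokens)

with open("lichess_2023-06.pgn") as f:             # hypothetical Lichess dump file
    while (game := chess.pgn.read_game(f)) is not None:
        print(pgn_with_fen_every_n(game, n=10))
```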
The ChessGPT paper does something like that: https://arxiv.org/abs/2306.09200
I hadn’t seen that recent paper, thanks.
"We collect chess game data from a one-month dump of the Lichess dataset, deliberately distinct from the month used in our own Lichess dataset. We design several model-based tasks including converting PGN to FEN, transferring UCI to FEN, and predicting legal moves, etc., resulting in 1.9M data samples."

Looks like they do a lot of things with FEN and train on a corpus which includes some FEN<->move-based-representation tasks, but they don’t really do anything which directly bears on this, aside from showing that their chess GPTs can turn UCI/PGNs into FENs with high accuracy. That would seem to imply that they have learned a good model of chess, because otherwise how could they take a move-by-move description (like UCI/PGN) and print out an accurate summary of the final board state (like FEN)?
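That check is easy to make mechanical. A minimal sketch, assuming python-chess (the example moves and the model-output placeholder are mine): replay the UCI move list to get the ground-truth final FEN and compare it against whatever the model printed.

```python
import chess

def final_fen_from_uci(uci_moves: list[str]) -> str:
    """Replay a UCI move list from the starting position and return the final FEN."""
    board = chess.Board()
    for uci in uci_moves:
        board.push_uci(uci)          # raises an error on illegal or malformed moves
    return board.fen()

moves = ["e2e4", "e7e5", "g1f3", "b8c6"]
reference_fen = final_fen_from_uci(moves)
model_fen = "..."                    # whatever the chess GPT produced for the same game
print(model_fen.strip() == reference_fen)
```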
I would’ve preferred to see them do something like train FEN-only vs PGN-only chess GPTs (where the games are identical, just the encoding is different) and demonstrate the former is better, where ‘better’ means something like having a higher Elo or not getting worse towards the end of the game.
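A minimal sketch of what such a controlled comparison could train on, assuming python-chess (the function and naming are mine): the same game emitted once as a pure move sequence and once as a pure board-state sequence, so the only difference between the two models’ corpora is the encoding.

```python
import chess.pgn

def encode_two_ways(game: chess.pgn.Game) -> tuple[str, str]:
    """Return (PGN-only encoding, FEN-only encoding) of the same game."""
    board = game.board()
    san_tokens = []
    fen_tokens = [board.fen()]            # starting position
    for move in game.mainline_moves():
        san_tokens.append(board.san(move))
        board.push(move)
        fen_tokens.append(board.fen())    # board state after every ply
    return " ".join(san_tokens), " | ".join(fen_tokens)
```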
I did train a transformer to predict moves from board positions (not strictly FEN, because with FEN the positional encodings don’t consistently point to the same squares). Maybe I’ll get around to letting it compete against the different GPTs.
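If I understand the point about FEN correctly (its run-length digits mean the same square lands at different token positions), a fixed 64-slot encoding avoids that; here is a minimal sketch with python-chess (the token scheme is my guess at what was meant, not the commenter’s actual code):

```python
import chess

def fixed_square_tokens(board: chess.Board) -> list[str]:
    """One token per square in a fixed a1..h8 order, '.' for empty squares."""
    return [
        board.piece_at(sq).symbol() if board.piece_at(sq) else "."
        for sq in chess.SQUARES
    ]

tokens = fixed_square_tokens(chess.Board())
print(len(tokens), tokens[:8])   # 64 ['R', 'N', 'B', 'Q', 'K', 'B', 'N', 'R']
```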
Something I’ve always wanted to try is training a super tiny transformer on Tic-Tac-Toe and seeing what it comes up with and how many games it needs to generalise.
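In case anyone wants a starting point, here is a minimal sketch (entirely my own illustration; the move-token format is arbitrary) for generating random complete Tic-Tac-Toe games as token sequences, the sort of corpus such a tiny transformer would be trained on, which makes it easy to vary the number of games and test generalisation:

```python
import random

WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def random_game(rng: random.Random) -> list[str]:
    """Play one random complete game and return it as move tokens plus an outcome token."""
    board = [None] * 9
    tokens = []
    for player in "XOXOXOXOX":                       # X moves first, at most 9 plies
        cell = rng.choice([i for i in range(9) if board[i] is None])
        board[cell] = player
        tokens.append(f"{player}{cell}")             # e.g. "X4" = X plays the centre
        if any(all(board[i] == player for i in line) for line in WINS):
            return tokens + [f"{player}-wins"]
    return tokens + ["draw"]

rng = random.Random(0)
corpus = [" ".join(random_game(rng)) for _ in range(10_000)]
print(corpus[0])
```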