GPT-3.5 isn’t multimodal, so it can’t really do that; I do wonder whether it would make GPT-4’s performance even better, though.
That said, this being a text-only model, really the only relevant information that would improve the situation is a freeze frame of the current state of the chessboard, expressed in any form; visuals just happen to work best for us, but GPT’s natural domain is the written word. So the correct test would probably be to replace the score (which requires computation to reconstruct the board state from scratch) with some notation that represents the current board directly, for example Forsyth-Edwards Notation. I’d like to see whether that makes it play well for longer (it also shortens the prompts, so it should help avoid running out of context window).
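For concreteness, here’s a minimal sketch of what I mean, assuming the python-chess library and a made-up SAN move list: rebuild the position from the moves played so far and emit it as a FEN string, so the prompt carries a snapshot of the board instead of the whole score.

```python
# Minimal sketch (assumes the python-chess library): rebuild the current
# position from the moves played so far and print it as a FEN string.
import chess

moves_so_far = ["e4", "e5", "Nf3", "Nc6", "Bb5"]  # made-up game fragment

board = chess.Board()
for san in moves_so_far:
    board.push_san(san)  # replays each SAN move; raises ValueError if illegal

print(board.fen())
# -> r1bqkbnr/pppp1ppp/2n5/1B2p3/4P3/5N2/PPPP1PPP/RNBQK2R b KQkq - 3 3
```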
FEN is definitely an option. By “visual”, what I had in mind was e.g. assembling an 8 by 8 grid of characters using https://en.m.wikipedia.org/wiki/Chess_symbols_in_Unicode (see the sketch below).
What I’m wondering is why people don’t do this.
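For concreteness, here’s a rough sketch of the kind of 8 by 8 Unicode grid described above, again assuming python-chess; the symbol table and the “·” for empty squares are arbitrary choices, not a standard.

```python
# Rough sketch (assumes python-chess): render a position as an 8 by 8 grid of
# Unicode chess symbols, one rank per line, rank 8 at the top.
import chess

SYMBOLS = {
    "K": "♔", "Q": "♕", "R": "♖", "B": "♗", "N": "♘", "P": "♙",
    "k": "♚", "q": "♛", "r": "♜", "b": "♝", "n": "♞", "p": "♟",
}

def unicode_grid(board: chess.Board) -> str:
    rows = []
    for rank in range(7, -1, -1):                 # rank 8 down to rank 1
        squares = []
        for file in range(8):                     # files a through h
            piece = board.piece_at(chess.square(file, rank))
            squares.append(SYMBOLS[piece.symbol()] if piece else "·")
        rows.append(" ".join(squares))
    return "\n".join(rows)

print(unicode_grid(chess.Board()))                # starting position
```

(I believe python-chess also ships a Board.unicode() helper that produces roughly this.)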
FEN is essentially the same thing as that, but better. Try to “think as a GPT”: if you’re a fundamentally textual mind, then a no-frills, standardized representation that compresses all the required information into a few well-known tokens will be ideal. With a custom representation it might instead have to learn it, the Unicode chess symbols may be unusual tokens, and any added tabulation or decoration is more of a source of confusion than anything else. It improves clarity for us humans, because we’re highly visual beings, but not necessarily for a transformer. Plain text conveys the same information a lot more straightforwardly, and if it’s something that is likely to have appeared a lot in the training set, all the better.
I get the argument, but I’m not sure it’s true. There might be enough Unicode chessboards on the internet that it has learned the basics of the representation, and it might be able to transfer-learn some strategies it sees in other notations to become good at Unicode chessboards, and a transformer might be able to exploit the geometry of the chessboard. Not sure.
Either FEN or a Unicode chessboard could be interesting; comparing the two would be interesting as well.
It’s a good thought, and I had the same one a while ago, but I think dr_s is right here; FEN isn’t helpful to GPT-3.5 because it hasn’t seen many FENs in its training, and it just tends to bungle it.
Lichess study, ChatGPT conversation link
GPT-3.5 has trouble maintaining a correct FEN from the start; it makes its first illegal move on move 7 and starts making many illegal moves around move 13.
Apparently it also bungles the Unicode representation: https://chat.openai.com/share/10b8b0d3-7c80-427a-aaf7-ea370f3a471b
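For anyone who wants to reproduce this, here is one way illegal moves can be flagged automatically; a minimal sketch assuming python-chess and SAN-formatted replies, with a made-up transcript standing in for the model’s actual output.

```python
# Minimal sketch (assumes python-chess): replay the model's SAN replies and
# flag the first illegal one. 'model_moves' is a made-up transcript, not real
# GPT-3.5 output; a real check would first parse moves out of the chat replies.
import chess

model_moves = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Bxc6"]

board = chess.Board()
for ply, san in enumerate(model_moves, start=1):
    try:
        board.push_san(san)   # raises ValueError on illegal or unparseable SAN
    except ValueError:
        print(f"illegal move at ply {ply}: {san}")
        break
else:
    # All moves legal; the true FEN can also be compared against the FEN the
    # model reports, to catch the board-tracking errors mentioned above.
    print("all moves legal; true FEN:", board.fen())
```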
Ah, dang it. So it’s damned if you do, damned if you don’t: it has seen lots of scores, but they’re computationally difficult to keep track of, since they’re basically “diffs” of the board state, while there’s not enough FEN or other board notation going around for it to have learned to use that reliably. It cuts at the heart of one of the key things holding GPT back from generality: it seems to need to learn each thing separately, and doesn’t transfer skills all that well. If not for this, honestly, I’d call it AGI already in terms of the sheer scope of the things it can do.