FEN is essentially the same thing as that, but better. Try to “think as a GPT”: if you’re a fundamentally textual mind, then a no-frills, standardized representation that compresses all the required information into a few well-known tokens is ideal. With a custom representation it might instead have to learn the format from scratch, the Unicode chess symbols may be unusual tokens, and any added tabulation or decoration is more a source of confusion than anything else. A diagram improves clarity for us humans, because we’re highly visual beings, but not necessarily for a transformer. Plain text carries the same information far more straightforwardly, and if it’s a format that is likely to have appeared a lot in the training set, all the better.
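For concreteness, here’s a minimal sketch of the two representations side by side. The python-chess library is just my choice for illustration, not something anyone in the thread used:

```python
# Minimal sketch using python-chess (pip install chess) to contrast the
# two representations of the same position.
import chess

board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6"]:  # a few opening moves
    board.push_san(san)

# FEN: one compact line of common ASCII tokens that fully determines the
# position (placement, side to move, castling, en passant, move clocks).
print(board.fen())
# r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3

# Unicode diagram: an 8x8 grid of piece symbols, clear to a human eye
# but made of comparatively rare tokens for a language model.
print(board.unicode())
```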
I get the argument, but I’m not sure it’s true. There might be enough Unicode chessboards on the internet that it has learned the basics of the representation, and it might be able to transfer-learn some strategies it sees in other notations to become good at Unicode chessboards, and a transformer might be able to exploit the geometry of the chessboard. Not sure.
Either FEN or a Unicode chessboard could be interesting; comparing the two would be too.
It’s a good thought, and I had the same one a while ago, but I think dr_s is right here; FEN isn’t helpful to GPT-3.5 because it hasn’t seen many FENs in its training, and it just tends to bungle it.
Lichess study, ChatGPT conversation link
GPT-3.5 has trouble maintaining a correct FEN from the start: it makes its first illegal move on move 7 and starts making many illegal moves around move 13.
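For what it’s worth, here’s a sketch of how those illegal moves can be caught mechanically, assuming the model’s moves come back in standard algebraic notation. python-chess raises a ValueError subclass when a SAN move is invalid, ambiguous, or illegal in the current position:

```python
import chess

def first_illegal_move(san_moves):
    """Replay SAN moves from the start; return the 1-based ply index and
    SAN of the first move that can't be played, or None if all are legal."""
    board = chess.Board()
    for i, san in enumerate(san_moves, start=1):
        try:
            board.push_san(san)  # raises ValueError on a bad move
        except ValueError:       # covers invalid, ambiguous, and illegal SAN
            return i, san
    return None

# e.g. no black knight can reach f3 here, so this reports (4, 'Nf3'):
print(first_illegal_move(["e4", "e5", "Nf3", "Nf3"]))
```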
Apparently it also bungles the Unicode representation: https://chat.openai.com/share/10b8b0d3-7c80-427a-aaf7-ea370f3a471b
Ah, dang it. So it’s damned if you do, damned if you don’t: it has seen lots of game scores, but they’re computationally difficult to keep track of, since they’re basically “diffs” of the board state; meanwhile there isn’t enough FEN or other board notation around for it to have learned to use those reliably. This cuts to the heart of one of the key things holding GPT back from generality: it seems to need to learn each thing separately, and doesn’t transfer skills very well. If not for this, honestly, I’d call it AGI already in terms of the sheer scope of things it can do.
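To make the “diffs” point concrete, here’s one more small sketch (again with python-chess, purely illustrative): a SAN token like exd5 names only the change, so recovering the position means replaying the entire score, whereas a single FEN pins the state down on its own.

```python
import chess

# Each SAN token is a delta: "exd5" only makes sense given every move
# that came before it, so a model working from a game score must carry
# the full board state implicitly.
board = chess.Board()
for san in ["e4", "d5", "exd5"]:
    board.push_san(san)

# A FEN, by contrast, is a self-contained snapshot of that same state:
print(board.fen())
# rnbqkbnr/ppp1pppp/8/3P4/8/8/PPPP1PPP/RNBQKBNR b KQkq - 0 2
```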