I get the argument, but I’m not sure it’s true. There might be enough Unicode chessboards on the internet that it has learned the basics of the representation, and it might be able to transfer-learn some strategies it sees in other notations to become good at Unicode chessboards, and a transformer might be able to exploit the geometry of the chessboard. Not sure.
Either FEN or a Unicode chessboard could be interesting; comparing the two would be too.
It’s a good thought, and I had the same one a while ago, but I think dr_s is right here; FEN isn’t helpful to GPT-3.5 because it hasn’t seen many FENs in its training, and it just tends to bungle it.
Lichess study, ChatGPT conversation link
GPT-3.5 has trouble maintaining a correct FEN from the start; it makes its first illegal move on move 7 and starts making many illegal moves around move 13.
Apparently it also bungles the Unicode representation: https://chat.openai.com/share/10b8b0d3-7c80-427a-aaf7-ea370f3a471b
Ah, dang it. So it’s damned if you do, damned if you don’t: it has seen lots of game scores, but those are computationally difficult to keep track of, since they’re basically “diffs” of the board state, while there isn’t enough FEN or other board notation going around for it to have learned to use those reliably. This cuts to the heart of one of the key things holding GPT back from generality: it seems to need to learn each thing separately, and doesn’t transfer skills all that well. If not for this, honestly, I’d call it AGI already in terms of the sheer scope of the things it can do.
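To make the “diffs vs. snapshots” point concrete: a PGN move list has to be replayed from move one to know where anything stands, whereas the first field of a FEN is a complete board snapshot that can be rendered directly. A minimal sketch (the helper name `fen_to_unicode_board` and the `·` empty-square glyph are my own choices, not anything from the thread):

```python
# FEN's piece-placement field is a full snapshot of the board, so rendering
# a Unicode chessboard from it needs no game history -- unlike a PGN score,
# which is a sequence of "diffs" that must be replayed from the start.
UNICODE_PIECES = {
    "K": "♔", "Q": "♕", "R": "♖", "B": "♗", "N": "♘", "P": "♙",
    "k": "♚", "q": "♛", "r": "♜", "b": "♝", "n": "♞", "p": "♟",
}

def fen_to_unicode_board(fen: str) -> str:
    """Render the piece-placement field of a FEN as an 8x8 Unicode grid."""
    placement = fen.split()[0]            # first FEN field, e.g. "rnbqkbnr/..."
    rows = []
    for rank in placement.split("/"):     # FEN lists ranks from 8th down to 1st
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend("·" * int(ch))  # a digit means that many empty squares
            else:
                row.append(UNICODE_PIECES[ch])
        rows.append(" ".join(row))
    return "\n".join(rows)

START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(fen_to_unicode_board(START_FEN))
```

The point the thread is circling is visible here: tracking a game in FEN only requires copying the previous snapshot and applying one local change, while tracking it from a score requires simulating every move so far, which is exactly the kind of state-keeping GPT-3.5 seems to fumble.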