Small nitpick with the vocabulary here. There is a difference between ‘strategic’ and ‘tactical’, and the distinction is particularly pronounced in chess. Tactics is basically your ability to calculate and figure out puzzles. Finding a mate in 5 would be tactical. Strategy relates to things too big to calculate. For instance, creating certain pawn structures that you suspect will give you an advantage in a wide variety of likely scenarios, or placing a bishop in such a way that an opponent must play more defensively.
I wasn’t really sure which you were referring to here; it seems that you simply mean that GPT isn’t very good at playing strategy games in general, i.e. it’s bad at strategy AND tactics. My guess is that GPT is actually far better at strategy; it might have an okay understanding of what board state looks good and bad, but no consistent ability to run any sort of minimax to find a good move, even one turn ahead.
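To make the distinction concrete: even the weakest "tactical" play requires something like a one-ply search, i.e. enumerating legal moves, scoring each resulting position, and picking the best. Here is a minimal sketch of that idea; the dictionary board representation, the toy material-count evaluation, and the hand-supplied move list are illustrative stand-ins, not a real chess engine.

```python
# Toy one-ply "tactics": try each move, evaluate the resulting position
# statically, and pick the move with the best score.

PIECE_VALUES = {'p': 1, 'n': 3, 'b': 3, 'r': 5, 'q': 9, 'k': 0}

def material(board, side):
    """Material balance from `side`'s point of view.
    board: dict square -> piece letter (uppercase = white)."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() == (side == 'white') else -value
    return score

def apply_move(board, move):
    """Return a new board with the piece moved (captures by overwrite)."""
    src, dst = move
    new = dict(board)
    new[dst] = new.pop(src)
    return new

def best_move(board, moves, side):
    """One-ply greedy search: maximize static eval after our own move."""
    return max(moves, key=lambda m: material(apply_move(board, m), side))

# White pawn on e4 can push to e5 or capture the pawn on d5.
board = {'e4': 'P', 'd5': 'p', 'h5': 'q'}
moves = [('e4', 'e5'), ('e4', 'd5')]
print(best_move(board, moves, 'white'))  # ('e4', 'd5'): takes the pawn
```

Even this single-ply lookahead is more calculation than GPT reliably performs in-context, which is the sense in which its tactics lag behind its positional "vibes".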
It didn’t even seem to understand what the goals of any of the games were, despite being able to explain them in natural language. So it wasn’t even at a point where I could test a strategy vs. tactics distinction.
Ha, no kidding. Honestly, it can’t even play chess. I just tried to play it, and asked it to draw the board state after each move. It started breaking on move 3, and deleted its own king. I guess I win? Here was its last output.
For my move, I’ll play Kxf8:
8 r n b q . b . .
7 p p p p . p p p
6 . . . . . n . .
5 . . . . p . . .
4 . . . . . . . .
3 . P . . . . . .
2 P . P P P P P P
1 R N . Q K B N R
a b c d e f g h
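The missing king is easy to check mechanically. Below is a minimal sketch that scans a board diagram like the one above for both kings; it assumes the format shown (rank number, then eight space-separated squares, ‘.’ for empty, uppercase for white and lowercase for black).

```python
# Sanity-check a model-drawn ASCII chess board: both kings must be present.

BOARD = """\
8 r n b q . b . .
7 p p p p . p p p
6 . . . . . n . .
5 . . . . p . . .
4 . . . . . . . .
3 . P . . . . . .
2 P . P P P P P P
1 R N . Q K B N R
"""

def missing_kings(board_text):
    """Return which sides have no king on the board ('white', 'black')."""
    # Drop the leading rank number on each line, keep the eight squares.
    squares = [c for line in board_text.splitlines() for c in line.split()[1:]]
    missing = []
    if 'K' not in squares:
        missing.append('white')
    if 'k' not in squares:
        missing.append('black')
    return missing

print(missing_kings(BOARD))  # ['black'] — GPT has indeed deleted its own king
```

A check like this (or full move validation with a chess library) is what you'd wrap around the model if you actually wanted it to play a legal game.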
Apparently GPT-4 is only good at chess if you tell it not to explain anything (or show the board, as it turns out). This also suggests that the chess part is separate from the rest.