Interesting. Note that Jon Edwards didn’t win a single game there via play—he won one game because the opponent inputted the wrong move, and another because the opponent quit the tournament. All other games were draws.
Agreed; as I said, the most important things are compute and diligence. Just because a large fraction of the top games are draws doesn't really say much about whether the humans are adding an edge (a large fraction of elite chess games are draws, but no one doubts there are differences in skill level there). Really you'd want to see Jon Edwards's setup against a completely untweaked engine administered by a novice.
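For what it's worth, the mechanical half of that comparison is easy to set up. Here is a minimal sketch (using the python-chess library; the binary paths and option values are placeholders for illustration, not Jon Edwards's actual configuration) of how one might pit a tuned engine against a completely untweaked copy of the same engine as a baseline:

```python
# Sketch: one game between a tuned engine configuration and a default one,
# using python-chess. Paths and option values below are hypothetical.
import chess
import chess.engine

def play_game(white_path, black_path, white_options=None, movetime=1.0):
    """Play one game between two UCI engines and return the result string."""
    board = chess.Board()
    white = chess.engine.SimpleEngine.popen_uci(white_path)
    black = chess.engine.SimpleEngine.popen_uci(black_path)
    if white_options:
        white.configure(white_options)  # e.g. {"Hash": 4096, "Threads": 16}
    try:
        while not board.is_game_over():
            engine = white if board.turn == chess.WHITE else black
            result = engine.play(board, chess.engine.Limit(time=movetime))
            board.push(result.move)
        return board.result()  # "1-0", "0-1", or "1/2-1/2"
    finally:
        white.quit()
        black.quit()

# Example: tuned Stockfish (hypothetical settings) vs. an out-of-the-box copy.
# print(play_game("./stockfish", "./stockfish", {"Hash": 4096, "Threads": 16}))
```

Of course this only automates the engine-vs-engine part; the interesting question is how much a human operator on one side changes the results.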
the most important things are compute and diligence
I agree. Given that AI is strongly superhuman in chess, the only winning strategy is to remove the human from the loop entirely and instead invest in as much compute for the AI as one can afford.
a sequence that no computer would consider or find
If it's a sequence that no superhuman AI would consider, that means the sequence is inferior to the ones the AI would consider.
It seems that even after two decades of complete AI superiority, some top chess players are still imagining that they are in some ways better at chess than the AI, even though they can't win against it.
If you look at the actual scenario there, the game was essentially deadlocked, and the only possible way to win was to force the other player to advance a pawn. Stockfish can't look 30 moves ahead to see that this is possible, so it would have just flailed around.
You still need Stockfish, because without it any move you make could be a tactical error that the other player's computer would pounce on. But Stockfish can't see the greater strategic picture if it's beyond its tactical horizon.
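To make that division of labour concrete, here is a minimal sketch (assuming a local Stockfish binary and the python-chess library; the search depth and blunder threshold are illustrative assumptions, not anyone's actual settings) of the workflow: the human proposes candidate moves in service of the long-term plan, and the engine only vetoes candidates that are tactically losing within its horizon.

```python
# Sketch: human supplies plan-driven candidate moves; the engine filters out
# anything it judges to be a tactical blunder within its search depth.
# Engine path, depth, and threshold below are assumptions for illustration.
import chess
import chess.engine

def tactically_safe(board, candidate_moves, engine_path="./stockfish",
                    depth=25, blunder_threshold=-150):
    """Return the candidate moves the engine does not consider blunders."""
    safe = []
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for move in candidate_moves:
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            # Evaluate from the point of view of the side that just moved.
            score = info["score"].pov(not board.turn).score(mate_score=100000)
            board.pop()
            if score is not None and score > blunder_threshold:
                safe.append(move)
    return safe

# Usage: the human picks the strategic candidates, the engine checks tactics.
# board = chess.Board(some_fen)
# print(tactically_safe(board, [chess.Move.from_uci("g2g4")]))
```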
This seems needlessly narrow-minded. Just because AI is better than humans doesn't make it uniformly better than humans at all subtasks of chess.
I don't know enough about the specifics this guy is talking about (I am not an expert), but I do know that until the release of NN-based algorithms, most top players were still comfortable pointing to positions that the computer was mis-evaluating soon out of the opening.
To take another, more concrete example: computers were much better than humans in 2004, and yet Peter Leko still managed to refute a computer-prepared line OTB in a world championship game.