Although it’s not better than existing solutions, it’s a cool example of how good results can be achieved in a relatively automatic way. By contrast, the evaluation functions of the best chess engines have been carefully engineered and fine-tuned over many years, at least sometimes with assistance from people who are themselves master-level chess players. This neural network approach, on the other hand, took a relatively short time and could have been applied by someone with little chess skill.
edit: Having read the actual paper, it does sound like a certain amount of iteration, and expertise on the author’s part, was still required.
edit2: BTW, the paper is very clear and well written. I’d recommend giving it a read if you’re interested in the subject matter.
But how much of its performance comes from the neural network learning some non-trivial evaluation function and how much comes from brute-forcing the game tree on a modern computer?
If the neural network were replaced by a trivial heuristic, say material balance, how would the engine perform?
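For concreteness, here’s a minimal sketch of what such a trivial material-balance evaluator could look like, using the python-chess library and the classical 1/3/3/5/9 piece values (both of those choices are my assumptions; the engine in the paper has its own board representation):

```python
import chess

# Classical piece values; the king gets no material weight since it never
# comes off the board.
PIECE_VALUES = {
    chess.PAWN: 1,
    chess.KNIGHT: 3,
    chess.BISHOP: 3,
    chess.ROOK: 5,
    chess.QUEEN: 9,
    chess.KING: 0,
}

def material_balance(board: chess.Board) -> int:
    """Positive = White is ahead in material, negative = Black is ahead."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

# e.g. material_balance(chess.Board()) == 0 for the starting position
```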
In the paper they start with just material balance; then, via the learning process, their score on the evaluation test goes from “worse than all hand-written chess engines” to “better than all except the very best one” (and the best one, while more hand-crafted, also uses some ML/statistical tuning of numeric params, and has had a lot more effort put into it).
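The thread doesn’t spell out the training procedure, so here is only a heavily simplified, generic illustration of the family of approaches involved (temporal-difference-style bootstrapping over a linear evaluator seeded with material values; my own sketch, not the paper’s actual method):

```python
import random
import chess

# Feature vector: signed piece counts (White minus Black) per piece type.
PIECE_TYPES = [chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN]

def features(board):
    counts = [0.0] * len(PIECE_TYPES)
    for piece in board.piece_map().values():
        if piece.piece_type in PIECE_TYPES:
            sign = 1.0 if piece.color == chess.WHITE else -1.0
            counts[PIECE_TYPES.index(piece.piece_type)] += sign
    return counts

def evaluate(weights, board):
    return sum(w * x for w, x in zip(weights, features(board)))

def td_episode(weights, lr=0.001, max_plies=80):
    """One self-play episode with random moves: after each move, nudge the
    previous position's score toward the next position's score, or toward
    the game outcome at the end (TD(0)-style bootstrapping). Real systems
    pick moves by search rather than at random; glossed over here."""
    board = chess.Board()
    for _ in range(max_plies):
        if board.is_game_over():
            break
        prev_feats = features(board)
        prev_score = sum(w * x for w, x in zip(weights, prev_feats))
        board.push(random.choice(list(board.legal_moves)))
        if board.is_game_over():
            # Map the outcome onto a material-like scale (arbitrary choice).
            target = {"1-0": 39.0, "0-1": -39.0}.get(board.result(), 0.0)
        else:
            target = evaluate(weights, board)
        error = target - prev_score
        for i, x in enumerate(prev_feats):
            weights[i] += lr * error * x

# Seed with material balance, as the paper does, then refine by self-play:
weights = [1.0, 3.0, 3.0, 5.0, 9.0]
for _ in range(100):
    td_episode(weights)
```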
The reason the NN solution currently doesn’t do as well in real games is that it’s slower to evaluate and therefore can’t brute-force as far.
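To make that tradeoff concrete, here’s a bare-bones negamax search with a pluggable evaluation function (hypothetical glue code, not the paper’s engine). The evaluator is called once per leaf, so at a fixed time budget, an evaluator that’s orders of magnitude slower (a neural network vs. a material count) forces a correspondingly shallower search:

```python
import chess

def negamax(board: chess.Board, depth: int, evaluate) -> float:
    """Score `board` from the side-to-move's perspective by exhaustive
    search to `depth` plies, with `evaluate` applied at the leaves."""
    if depth == 0 or board.is_game_over():
        # `evaluate` is White-positive; flip the sign when Black is to move.
        white_score = evaluate(board)
        return white_score if board.turn == chess.WHITE else -white_score
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1, evaluate))
        board.pop()
    return best

# e.g. negamax(chess.Board(), 3, material_balance), reusing the evaluator
# sketched earlier; swapping in a slow NN evaluator caps the feasible depth.
```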
Thank you for recommending the paper; I don’t think I would have read it otherwise, and I greatly enjoyed it!