I don’t think the intuition that because both are huge, they must be roughly equally interpretable, is correct.
Tree search decomposes into a specific sequence of board states, each of which is easily readable; in practice trees are pruned, and can be pruned down to human-readable sizes.
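As a minimal sketch of that point (a toy alpha-beta search over a hand-made tree, nothing like AlphaGo’s actual MCTS): the search result decomposes into a readable line of states, and pruning shrinks what must be examined at all.

```python
# Toy minimax with alpha-beta pruning. The output is a value plus a
# principal variation: a short, human-readable sequence of state labels.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """node is either a leaf score (int) or a dict {state_label: subtree}."""
    if isinstance(node, int):              # leaf: just a score
        return node, []
    best_val = float("-inf") if maximizing else float("inf")
    best_line = []
    for label, child in node.items():
        val, line = alphabeta(child, not maximizing, alpha, beta)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_line = val, [label] + line
        if maximizing:
            alpha = max(alpha, best_val)
        else:
            beta = min(beta, best_val)
        if beta <= alpha:                  # prune: remaining siblings are skipped
            break
    return best_val, best_line

# Hypothetical tree: labels stand in for board states.
tree = {"a": {"a1": 3, "a2": 5},
        "b": {"b1": 6, "b2": 9},
        "c": {"c1": 1, "c2": 2}}
value, pv = alphabeta(tree, maximizing=True)
print(value, pv)  # → 6 ['b', 'b1']
```

The returned principal variation is exactly the kind of decomposition the comment describes: a short list of states a human can step through and check.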
This isn’t true for the neural net. If you decompose the information in the AlphaGo net into a huge list of arithmetic operations, then either the “arithmetic” is the whole training process, in which case the list is far larger than the tree, or it’s just the trained net, in which case it’s still less interpretable than the tree.