I am confused by section 5 in the paper about probabilistic generation of the search tree—the paper states:
Testing showed that a naive implementation of probability-limited search is slightly (26 ± 12 rating points) stronger than a naive implementation of depth-limited search.
But the creators of the most popular engines literally spend hours a day trying to increase the rating of their engine, and 26 rating points is massive. Is this probabilistic search simply that unknown and good? Or does the trick lie in the “stronger than a naive implementation of depth-limited search”: is there some reason we expect depth-limited search to have sophisticated implementations, but do not expect this for probabilistic search?
Something like that, I think. The paper suggests that the optimizations applied to depth-based search in more sophisticated engines already amount, in effect, to an approximation of probability-based search.
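For concreteness, here is a rough sketch of how I read the two naive schemes being compared. The tree, evaluation values, and move priors are invented for illustration, and the paper may define probability-limited expansion differently; the idea is just that a line is extended while the product of move probabilities along it stays above a threshold, so likely lines get searched deeper than unlikely ones.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    value: float                       # static evaluation of this position
    children: List["Node"] = field(default_factory=list)
    priors: List[float] = field(default_factory=list)  # P(move) per child

def depth_limited(node: Node, depth: int, maximize: bool = True) -> float:
    """Naive minimax: expand every child down to a fixed depth."""
    if depth == 0 or not node.children:
        return node.value
    vals = [depth_limited(c, depth - 1, not maximize) for c in node.children]
    return max(vals) if maximize else min(vals)

def prob_limited(node: Node, path_prob: float, threshold: float,
                 maximize: bool = True) -> float:
    """Naive probability-limited minimax: expand a child only while the
    product of move priors along the line stays above the threshold."""
    expandable = [(c, p) for c, p in zip(node.children, node.priors)
                  if path_prob * p >= threshold]
    if not expandable:
        return node.value
    vals = [prob_limited(c, path_prob * p, threshold, not maximize)
            for c, p in expandable]
    return max(vals) if maximize else min(vals)

# Toy tree: the first move is far more likely (prior 0.8) than the second.
root = Node(0.0,
            children=[Node(2.0, children=[Node(3.0), Node(-1.0)],
                           priors=[0.9, 0.1]),
                      Node(1.0, children=[Node(10.0), Node(0.0)],
                           priors=[0.5, 0.5])],
            priors=[0.8, 0.2])

print(depth_limited(root, 2))        # 0.0: every line cut at the same ply
print(prob_limited(root, 1.0, 0.5))  # 3.0: only the likely line is expanded
```

The point of the contrast is that the two schemes spend the node budget differently: depth limiting treats all lines equally, while probability limiting concentrates effort on plausible continuations, which is roughly what heuristics like pruning and extensions in real engines also do.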
If most of the sophistication in non-naive depth-based search just serves to approximate probabilistic search, shouldn't probabilistic search already be comparable in performance with it? Since probabilistic search seems relatively simple, the argument above seems insufficient to explain why it is not used more widely, right?