OK, let’s go with chess. For that game there is an optimal balance between the tree search and the evaluation function: the search explores, the evaluation scores.
The evaluation can obviously predict the search to some degree. Humans are very bad at searching, yet some can still win against computers.
The search decompresses the information into something more easily evaluated by a computer. A human can do it with much less expansion. Is that just a matter of hardware, or is it because the information was there all along and just needed a “smarter” analysis?
I may not have been clear enough. The evaluation _IS_ a search. The value of a position is exactly the value of a minimax adversarial search down to a leaf (game end).
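To make that concrete, a minimal sketch in Python, assuming a hypothetical game interface that is not from any real engine: legal_moves(pos), apply(pos, move), and terminal_value(pos) returning +1/0/-1 from the side to move, or None while the game is ongoing:

    # Exact negamax: the "evaluation" of a position is nothing more than
    # the value of an adversarial search carried all the way to game end.
    # (legal_moves, apply, terminal_value are a hypothetical interface.)
    def value(pos):
        result = terminal_value(pos)
        if result is not None:
            return result  # leaf reached: win/draw/loss is the exact value
        # The best move for us is the one that is worst for the opponent.
        return max(-value(apply(pos, m)) for m in legal_moves(pos))

Utterly infeasible for chess, of course, which is the whole point of what follows.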
Compression and caching and prediction are ways to work around the fact that we don’t actually have the lookup table available.
Then you are wrong: since the search usually does not reach a checkmate state, there is always a scoring heuristic replacing further search at some depth.
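A sketch of what I mean, reusing the same hypothetical interface plus a made-up heuristic_score(pos) (material count, mobility, whatever):

    # Depth-limited version: identical to the exact search, except a
    # heuristic replaces the subtree we never expand below the horizon.
    def value(pos, depth):
        result = terminal_value(pos)
        if result is not None:
            return result
        if depth == 0:
            return heuristic_score(pos)  # stands in for the missing search
        return max(-value(apply(pos, m), depth - 1) for m in legal_moves(pos))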
I know, and I had read chessprogramming prior to your post; you are wrong to assume that I am a total idiot just because I got myself confused.
Didn’t mean to condescend; I was mostly pointing out that the complexity lies in the iteration of simple rules with fairly wide branching. I will still argue that all the heuristics and evaluation mechanisms used by standard engines are effectively search predictions, useful only because the full search is infeasible, and because the full search results have not been memoized (in a not-so-giant-by-today’s-standards lookup table of position->value).
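A sketch of the memoization point, with the same made-up interface as before (a real engine would use Zobrist hashing and a bounded transposition table rather than an unbounded dict):

    # If the full position -> value table existed, evaluation would be a
    # single lookup. Caching exact results is the partial version of that.
    table = {}

    def value(pos):
        if pos in table:
            return table[pos]
        result = terminal_value(pos)
        if result is None:
            # Not terminal: fall back to the exact adversarial search.
            result = max(-value(apply(pos, m)) for m in legal_moves(pos))
        table[pos] = result
        return result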
Put that way I completely agree.