The principle of alpha-beta pruning is, roughly, “search for moves until it’s clear that all other moves are bad”. It means that if you find a sufficiently good move early in the search, you just stop searching the alternatives and play that move. And if you have sufficiently good move-ordering heuristics, you won’t have much of a tree search.
The point I was trying to make certainly wasn’t that current search implementations necessarily look at every possibility. I am aware that they are heavily optimised; I have implemented alpha-beta pruning myself.
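For concreteness, here is roughly the shape of what we are both talking about (a minimal negamax sketch in Python; `legal_moves`, `apply_move`, `evaluate`, and `order_moves` are hypothetical placeholders for a real game, not any particular engine's API):

```python
import math

def alphabeta(position, depth, alpha, beta,
              legal_moves, apply_move, evaluate, order_moves):
    """Negamax search with alpha-beta pruning, to `depth` plies.

    `evaluate` is assumed to score the position from the side to move's
    perspective; the other three helpers are placeholders for a real game.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)

    best = -math.inf
    # Move ordering is what makes the pruning bite: if a strong move is
    # examined first, most of the remaining siblings get cut off.
    for move in order_moves(position, moves):
        score = -alphabeta(apply_move(position, move), depth - 1,
                           -beta, -alpha,
                           legal_moves, apply_move, evaluate, order_moves)
        best = max(best, score)
        alpha = max(alpha, best)
        if alpha >= beta:
            # Cutoff: the opponent already has a refutation earlier in the
            # tree, so the rest of this node's moves never get searched.
            break
    return best
```

You call it at the root with alpha = -inf and beta = +inf; how much of the tree actually gets visited is then determined almost entirely by how good `order_moves` is.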
My point is that humans use structure that is specific to a problem, and potentially new and unique to it, to narrow down the search space. Nothing that currently exists in search pruning compares even remotely.
Which is why all these systems use orders of magnitude more search than humans do (even the ones with alpha-beta pruning). And this is also why all these systems are narrow enough that you can exploit the structure that is always there to optimise the search.
And since I just stumbled upon this article, here is Melanie Mitchell’s version of this point:
To me, this is reminiscent of the comparison between computer and human chess players. Computer players get a lot of their ability from the amount of look-ahead search they can do, applying their brute-force computational powers, whereas good human chess players actually don’t do that much search, but rather use their capacity for abstraction to understand the kind of board position they’re faced with and to plan what move to make.
The better one is at abstraction, the less search one has to do.
I don’t think you actually prove your point.