If our general algorithm for solving many problems turns out "kind of meh" compared to specialized algorithms, then what prevents us from increasing the resource budget and bundling all our specialized algorithms into one big one?
Consider an algorithm specialized to chess in particular, as compared to a more general algorithm that plays games in general.
The chess specific algorithm can have a large advantage, for the same amount of compute, by containing precomputed data specific to chess, for example a table of good opening moves.
All that precomputation must have happened somewhere, possibly in human brains. So there exists a general algorithm that could be told the rules of chess, perform the same large amount of precomputation, and then play chess just as well, for the same runtime resources, as the specialized chess algorithm.
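The opening-book advantage described above can be sketched in a few lines. This is an illustrative toy, not a real engine: the book entries, position encoding, and `search_best_move` stand-in are all hypothetical.

```python
# Toy sketch: a chess-specific algorithm that trades storage for compute.
# Positions already analyzed offline get an O(1) lookup; everything else
# falls back to an (expensive) game-tree search.

# Precomputed data specific to chess: position -> known good move.
# (Hypothetical toy entries; a real book comes from large offline analysis.)
OPENING_BOOK = {
    "start": "e2e4",
    "start e2e4 e7e5": "g1f3",
}

def search_best_move(position: str) -> str:
    """Stand-in for an expensive game-tree search (placeholder result)."""
    return "a2a3"

def best_move(position: str) -> str:
    # The specialized algorithm's edge: skip the search entirely
    # whenever the position was already analyzed offline.
    if position in OPENING_BOOK:
        return OPENING_BOOK[position]
    return search_best_move(position)
```

The point is that `OPENING_BOOK` could equally well have been filled in by a general algorithm given the rules of chess and a precomputation budget; once built, both play identically for the same runtime cost.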
No algorithm can contain precomputed values for all possible games, as there are infinitely many / exponentially many of those.
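A back-of-the-envelope count makes the exponential claim concrete. The branching factor and game length below are rough, conventional chess-like estimates (my assumption, not figures from the discussion):

```python
# Number of distinct game continuations grows as b**d for branching
# factor b and depth d. Rough chess-like numbers (about 30 legal moves
# per position, about 80 plies per game) already dwarf any physical
# storage, so no table can precompute every game.
branching_factor = 30
depth_plies = 80
games = branching_factor ** depth_plies
print(f"~10^{len(str(games)) - 1} continuations")
```

For comparison, the observable universe is usually estimated at around 10^80 atoms, so even one bit per continuation is physically impossible.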
I don’t understand how this contradicts anything. As soon as you relax some of the physical constraints, you can start to pile up precomputation, memory, budget, volume, whatever. If you spend all of that on one task, then, well, you should get higher performance than any approach that doesn’t focus on a single thing. Alternatively, given enough of any unconstrained resource, you can build an algorithm that outperforms anything you’ve made before.
Precomputation is just another resource.