The fact that human reasoning isn’t optimal in no way implies that the intelligently designed algorithm of Bayesian reasoning is better.
If you mean optimal as in “maximizing accuracy given the processing power”, then yes. But if you mean optimal as in “maximizing accuracy given the data”, then Bayesian reasoning is optimal by the definition of conditional probability.
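Spelling out that definitional step (a standard derivation, nothing beyond what the comment already claims): the definition of conditional probability gives P(H \mid D)\,P(D) = P(H \cap D) = P(D \mid H)\,P(H), so for P(D) > 0,

    P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}

which is Bayes’ theorem. An agent that updates by conditioning on the data is, by construction, extracting everything the data says about H; what this leaves out is the cost of actually computing those terms.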
Maximizing accuracy given available processing power and available data is the core problem when it comes to finding a good decision theory.
We don’t ask what decision theory God should use but what decision theory humans should use. Both go and chess are NP-hard and can’t be fully processed even if you have a computer built from all the atoms in the universe.
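To put rough numbers on that claim (a back-of-the-envelope sketch using the standard estimates of ~10^120 for the chess game tree, the Shannon number, and ~10^80 atoms in the observable universe):

    # Rough orders of magnitude, not exact counts.
    GAME_TREE_CHESS = 10**120    # Shannon's estimate of the chess game-tree size
    ATOMS_IN_UNIVERSE = 10**80   # common estimate for the observable universe
    AGE_OF_UNIVERSE_NS = int(4.3e17) * 10**9   # ~13.8 billion years in nanoseconds

    # Even evaluating one position per atom per nanosecond for the entire age
    # of the universe covers only ~10^106 positions, short of the game tree
    # by more than a dozen orders of magnitude.
    evaluations = ATOMS_IN_UNIVERSE * AGE_OF_UNIVERSE_NS
    print(evaluations < GAME_TREE_CHESS)   # True, and not remotely close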
You’re confusing optimality in terms of results with efficiency in terms of computing power in your use of “NP-hard”. Something like the travelling salesman problem is NP-hard in that there’s no known way to solve it beyond a certain efficiency in terms of computing power; getting the optimal result is easy if you don’t care about efficiency, since you can just brute-force every route. That doesn’t apply to chess or go, where there is no known way to get optimal results no matter how much computing power you have. These are two completely different things.
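To illustrate the “easy in terms of results” half of that (a minimal sketch; the four example cities are made up):

    from itertools import permutations
    from math import dist

    # Hypothetical cities as (x, y) coordinates.
    cities = [(0, 0), (3, 1), (1, 4), (5, 2)]

    def tour_length(order):
        """Total length of the closed tour visiting the cities in this order."""
        return sum(dist(cities[a], cities[b])
                   for a, b in zip(order, order[1:] + order[:1]))

    # Brute force: try every ordering. Guaranteed optimal in terms of results;
    # the catch is purely efficiency, since the number of orderings grows as n!.
    best = min(permutations(range(len(cities))), key=tour_length)
    print(best, tour_length(best))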
Surely there is a known way to play chess and go optimally (in the sense of always either winning or forcing a draw). You just search through the entire game tree, instead of a sub-tree, using the standard minimax algorithm to choose the best move each turn. This is obviously completely computationally infeasible, but possible in principle. See Solved game.
Correct.
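For concreteness, here is what “search the entire game tree with minimax” looks like, using a toy Nim variant small enough to actually run (the game is my stand-in; chess and go have the same structure, just an astronomically bigger tree):

    def minimax(stones, maximizing):
        """Exhaustive minimax for a toy Nim variant: players alternately take
        1 or 2 stones, and whoever takes the last stone wins. Returns +1 if
        the maximizing player wins under perfect play, -1 otherwise."""
        if stones == 0:
            # The previous player took the last stone, so whoever is to move lost.
            return -1 if maximizing else +1
        values = [minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones]
        # Each player picks the move best for themselves; visiting the whole
        # tree with no pruning and no depth cutoff is what makes this optimal.
        return max(values) if maximizing else min(values)

    print(minimax(7, True))   # +1: seven stones is a win for the player to move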
It would be extraordinary if the algorithm that is optimal given infinite computational resources were also optimal given limited resources.
I suspect that by framing this as a battle between Bayesian inference and actual evolved human algorithms, we are missing a third alternative: algorithm X, the optimal algorithm for decision-making given the resources and options we have in the society we find ourselves in.