There’s no person who plays chess at a high level while employing Bayesian reasoning.
In Go, Bayesian reasoning performs even worse. A good Go player makes some of his moves simply because he appreciates their beauty, without having “rational” reasons for them. Our brains are capable of very complex pattern matching, which allows the best humans to be better than rule-based computer algorithms at a large variety of tasks.
In chess or Go, idealized Bayesians just make the right move because they are logically omniscient.
Logical omniscience comes close to finding the perfect move, but understanding the imperfections of the opponent can slightly alter what the ideal move is. This requires prior information that cannot be derived logically (from the rules of the game).
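Here is a minimal sketch of that point, assuming a toy game with two candidate moves and a single made-up prior over opponent blunders (all names, payoffs, and probabilities are illustrative, not taken from any real game):

```python
# Toy illustration: the move that is best against a perfect opponent
# can lose expected value once we hold a prior about how this
# particular opponent errs. All numbers here are made up.

# payoff[my_move][opponent_reply], from my perspective
payoff = {
    "safe":  {"best_reply": 0.0,  "blunder": 0.5},
    "sharp": {"best_reply": -1.0, "blunder": 2.0},
}

# Prior probability that this opponent blunders in sharp positions.
# This number cannot be derived from the rules of the game.
p_blunder = 0.6

def expected_value(move):
    return (p_blunder * payoff[move]["blunder"]
            + (1 - p_blunder) * payoff[move]["best_reply"])

# Against a perfect opponent (p_blunder = 0), "safe" is correct: 0.0 > -1.0.
# Against this opponent, "sharp" wins on expectation:
# 0.6 * 2.0 + 0.4 * (-1.0) = 0.8.
best_move = max(payoff, key=expected_value)
print(best_move, expected_value(best_move))  # -> sharp, ~0.8
```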
Idealized Bayesians don’t have to be logically omniscient—they can have a prior which assigns probability to logically impossible worlds.
If you argue that Bayesianism is only a good way to reason when you are omniscient, and a bad idea for people who aren’t, then I can agree with your argument.
However, if you are omniscient, you don’t need much decision theory anyway.
There’s a bit of a difference between logical omniscience and vanilla omniscience: with logical omniscience, you can perfectly work out all the implications of all the evidence you find, whereas with the other sort, you get to look at a printout of the universe’s state.
But you don’t have either of those in the real world, and therefore they shouldn’t factor into a discussion about effective decision-making strategies.
You’ll never find perfect equality in the real world, so let’s abandon math.
You will never find evidence for the existence of God, so let’s abandon religion...
Yes! Already did!
Where’s the difference between believing in nonexistent logical omniscience and believing in nonexistent Gods?
I’d imagine Deep Blue is more approximately Bayesian than a human (search trees vs. giant crazy neural net).
I think you mean “cleanly constructed” or something like that. Minimax search doesn’t deal with uncertainty at all, whereas good human chess players presumably do so, causally model their opponents, and the like.
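For concreteness, here is a bare-bones minimax sketch; the game interface (moves, apply, is_terminal, score) is a hypothetical stand-in, not any real engine’s API. Note that nothing in it represents uncertainty: the opponent is assumed to always pick the reply that is worst for us.

```python
# Bare-bones minimax sketch. The state interface (moves, apply,
# is_terminal, score) is a hypothetical stand-in for illustration.
def minimax(state, depth, maximizing):
    if depth == 0 or state.is_terminal():
        # A single deterministic number: no probability distribution
        # over positions or over what the opponent might do.
        return state.score()
    children = (minimax(state.apply(m), depth - 1, not maximizing)
                for m in state.moves())
    # The opponent is modeled as perfectly adversarial: we take the
    # exact worst case rather than an expectation over likely errors.
    return max(children) if maximizing else min(children)
```

A player who causally modeled an imperfect opponent would, in effect, replace that min with an expectation over a distribution of likely replies.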