If we think we’re in Bayes’ world, we expect to be in situations where getting better predictions gives us more control over outcomes.
No, not really. Bayes gives you information, but doesn’t give you capabilities. A perfect Bayesian will find the optimal place/path within the constraints of his capabilities, but no more. Someone with worse predictions but better abilities might (or might not) do better.
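A toy sketch of that point, with a setup and numbers invented purely for illustration: a perfect predictor confined to a few options competes against a noisy predictor who can reach all of them. Depending on the random draw, either one can come out ahead, which matches the “might (or might not)” above.

```python
import random

random.seed(0)

# Ten candidate "places" with hidden true payoffs (invented numbers).
true_reward = [random.random() for _ in range(10)]

def pick(beliefs, reachable):
    # Choose the reachable option you *believe* is best; score by truth.
    choice = max(reachable, key=lambda i: beliefs[i])
    return true_reward[choice]

# Perfect Bayesian: beliefs match reality, but only options 0-2 are reachable.
score_perfect = pick(true_reward, range(3))

# Sloppier predictor: noisy beliefs, but all ten options are reachable.
noisy_beliefs = [r + random.gauss(0, 0.2) for r in true_reward]
score_noisy = pick(noisy_beliefs, range(10))

print(f"perfect predictions, narrow reach: {score_perfect:.2f}")
print(f"noisy predictions, wide reach:     {score_noisy:.2f}")
```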
If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
Um, Bayes doesn’t give you any promises, never mind guarantees, about your satisfaction. It’s basically like classical logic—it tells you the correct way to manipulate certain kinds of statements. “Satisfaction” is nowhere near its vocabulary.
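For concreteness, here is the rule itself in a few lines of Python, with probabilities invented for illustration. Nothing in it mentions goals or satisfaction; it just turns a prior and a likelihood into a posterior.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# A rule for updating a belief, with no term for how you feel about it.
p_h = 0.01           # prior: P(hypothesis)
p_e_given_h = 0.9    # likelihood: P(evidence | hypothesis)
p_e_given_not_h = 0.05

# Total probability of the evidence.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"posterior P(H|E) = {p_h_given_e:.3f}")  # ~0.154
```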
Exactly! That’s why I asked: “To what extent does [Bayes] provide humans with good advice as to how they should explicitly think about their beliefs and goals?”
We clearly do live in a world where Bayes math works. But that’s a different question from whether it represents good advice for human beings’ explicit, trained thinking about their goals.
Edit: I’ve updated the post above to make this clearer.