You might argue that bayesianism is conceptually useful, and thereby helps real humans reason better. But I think that concepts in bayesianism are primarily useful because they have suggestive names, which make it hard to realise how much work our intuitions are doing to translate from ideal bayesianism to our actual lives.
This reminds me of an old critique of LW Bayesianism by David Chapman, and the conclusion that we reached in the comment section of it:

The valuable part of LW, for many people, is a collection of simple, practical insights into reasoning, rather than the complex technical framework. [...] The small practical insights [...] are all excellent. [...] I’d suggest that the Bayesian framework is not necessary to understand any of them, and perhaps not helpful (except maybe for “Update Yourself Incrementally”). Maybe this depends on one’s cognitive style. For some people, understanding that all those insights loosely relate to a mathematical framework would be satisfying and helpful; for others, the framework would be difficult to understand and an unnecessary distraction.
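(To make “Update Yourself Incrementally”, the one insight the quoted passage concedes the framework might help with, concrete: below is a minimal sketch of incremental Bayesian updating. The hypothesis, the evidence, and all the numbers are invented purely for illustration.)

```python
# A minimal sketch of "updating incrementally": apply Bayes' rule once per
# piece of evidence, so each observation nudges the belief rather than
# forcing a single all-at-once recalculation. All numbers are made up.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H) and P(E | not-H)."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1.0 - prior)
    return joint_h / (joint_h + joint_not_h)

belief = 0.3  # invented prior for some hypothesis H
evidence = [(0.8, 0.4), (0.7, 0.5), (0.2, 0.6)]  # (P(E|H), P(E|not-H)) per observation
for p_e_given_h, p_e_given_not_h in evidence:
    belief = update(belief, p_e_given_h, p_e_given_not_h)
    print(f"belief after this observation: {belief:.3f}")
```

Whether anyone needs this formula to internalize the habit is, of course, exactly what the quoted passage is questioning.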
Yes, I saw Chapman’s critiques after someone linked one in the comments below, and broadly agree with them.
I also broadly agree with the conclusion that you quote; that seems fairly similar to what I was trying to get at in the second half of the post. But in the first half of the post, I was also trying to gesture at a mistake made not by people who want simple, practical insights, but rather by people who do research in AI safety, learning human preferences, and so on, using mathematical models of near-ideal reasoning. However, it looks like developing this critique thoroughly would require much more effort than I have time for.
Chapman’s critique was stronger: his argument doesn’t depend on computational ability being finite.
I think some parts of it do (e.g. in this post). But yes, I do really like Chapman’s critique and wish I’d remembered it before writing this so that I could reference it and build on it.
Especially: “Understanding informal reasoning is probably more important than understanding technical methods.” I very much agree with this.
If Bayesianism worked for an agent with arbitrarily much cognitive power, the eternalism that Chapman criticizes would still be there. Christian belief in a God that escapes full human understanding is still eternalism.
“Probability theory does not extend logic” is the post where Chapman makes that argument in more depth.
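(For context, the claim in that post, as I read it, is that probability theory generalizes propositional logic, since degrees of belief attach to propositions and their truth-functional combinations, but not predicate logic, whose quantified sentences need extra machinery. The raven example below is a standard illustration, not taken from the post.)

```latex
% Propositional structure carries over to degrees of belief:
P(A \land B) = P(A)\,P(B \mid A), \qquad P(\lnot A) = 1 - P(A).
% But a quantified sentence such as
\forall x \,\bigl(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\bigr)
% ranges over a whole domain; the probability axioms alone do not say how to
% assign or update a degree of belief in it without further structure
% (for instance, a prior over possible models or hypotheses).
```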