Thanks for your comments. Yes, this chapter, like the previous two, glosses over a lot of details, because my purpose isn’t to explain these topics in detail but to say just enough to sow the seeds of doubt about the certainty many people have. It’s a fine line to walk between oversimplifying and giving too much detail, and I probably still haven’t got it right! For example, I would kind of love it if I didn’t have to talk about Bayesians at all, but it just seemed the most straightforward way to contrast their ideal with our reality as humans. Maybe in later drafts I’ll find a better way to approach it.
In fact, I think it is a bit misleading to talk about Bayesians this way. Bayesianism isn’t necessarily fully self-endorsing, so Bayesians can have self-trust issues too, and can get stuck in bad equilibria with themselves that resemble akrasia. Indeed, the account of akrasia in Ainslie’s Breakdown of Will still uses Bayesian rationality, just with a temporally inconsistent utility function.
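To make that concrete, here is a minimal sketch (with illustrative numbers of my own, not Ainslie’s) of how a hyperbolically discounted utility function produces the preference reversals that look like akrasia, even though the agent is doing ordinary expected-value maximization at every moment:

```python
# Sketch: preference reversal under hyperbolic discounting.
# The discount curve and reward values are illustrative assumptions,
# not figures from Breakdown of Will.

def hyperbolic_value(amount, delay, k=1.0):
    """Ainslie-style hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

small, small_at = 50, 10   # smaller-sooner reward, delivered at t = 10
large, large_at = 100, 14  # larger-later reward, delivered at t = 14

for now in range(0, 10):
    v_small = hyperbolic_value(small, small_at - now)
    v_large = hyperbolic_value(large, large_at - now)
    choice = "larger-later" if v_large > v_small else "smaller-sooner"
    print(f"t={now}: small={v_small:.2f}, large={v_large:.2f} -> prefers {choice}")

# Far in advance the agent prefers the larger-later reward; as the smaller
# reward draws near, the preference flips. The same agent disagrees with
# its earlier self, which looks like akrasia from the outside.
```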
It would seem (to me) less misleading to make the case that self-trust is a very general problem for rational agents, e.g. by sketching the Löbian obstacle, although I know you said you’re not super familiar with that stuff. But the general point is that using some epistemics or decision theory doesn’t imply endorsing it reflectively, similar to Gödel’s point about the limits of logic. So “by default” you expect some disconnect; it doesn’t actually require a dual-process theory with two different systems in conflict. What a system reflectively endorses is already formally distinct from what it does.
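In case it helps, the one-formula version of the obstacle (only a sketch, not the full story) is Löb’s theorem: if a theory T proves that a proof of P would guarantee P, then T already proves P,

\[
T \vdash (\Box_T P \rightarrow P) \;\Rightarrow\; T \vdash P.
\]

So T cannot assert \(\Box_T P \rightarrow P\) for every P, i.e. “whatever I prove is true,” without collapsing into proving everything. An agent reasoning in T therefore cannot fully trust its own future conclusions, even though it is the very system producing them.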
Okay, I’ll try to look into it again. Thanks for the suggestion.
Yeah, that makes sense; I think that chapter accomplishes its purpose, then.