That’s not a bad writeup: a bit simplistic, but accessible to most aspiring rationalists. I like the reminder that we are not to trust ourselves unconditionally, the way we are prone to do when not thinking about it; “I know/feel I’m right!” is an all-too-common argument. I would like to add that the System 1/System 2 model, or any other “two minds” model, is very much simplified, though it is an understandable first step toward modeling human inconsistency.
Now, a bit of an aside: the general principle is that to understand how something works, you don’t study it in equilibrium, because you miss all the essential stuff. You don’t see the hidden gears because they are almost exactly compensated by a different set of gears. I cannot emphasize enough how often this error is made when trying to understand the human mind. What we see is “normal”, which is basically tiny deviations from equilibrium. To learn anything interesting about the mind, you study minds out of equilibrium, and because no IRB will ever authorize “taking a mind out of equilibrium” with psychoactive substances, torture, or emotional abuse (can’t blame them!), one has to study such subjects in vivo.
Back to the topic at hand. “Two minds” is what we glean from “normal” people. Those whose minds fell apart expose many more (broken) parts to the outside world, whole as well as fragmented, and one can observe extreme versions of akrasia in those living with multiple personalities. Similarly, for Bayesian updating, the interesting situations are those where some updating is explicitly punished, as in a cult-like setting, which forces self-inconsistency and cognitive dissonance by preventing people from “updating the most important beliefs to update”.
Thanks for your comments. Yes, this chapter, like the previous two, glosses over a lot of details, because my purpose isn’t to explain these topics in depth but to say just enough to sow the seeds of doubt about the certainty many people have. It’s a fine line to walk between oversimplifying and giving too much detail. I probably still haven’t got it right! For example, I would kind of love it if I didn’t have to talk about Bayesians at all, but it just seemed the most straightforward way to contrast their ideal with our reality as humans. Maybe in later drafts I’ll find a better way to approach it.
In fact, I think it is a bit misleading to talk about Bayesians this way. Bayesianism isn’t necessarily fully self-endorsing, so Bayesians can have self-trust issues too, and can get stuck in bad equilibria with themselves that resemble akrasia. Indeed, the account of akrasia in Breakdown of Will still uses Bayesian rationality, although with a temporally inconsistent utility function.
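To make the Breakdown of Will point concrete, here is a minimal sketch (toy numbers of my own, not Ainslie’s actual figures) of how hyperbolic discounting by itself produces an akrasia-like preference reversal, with no second “system” involved:

```python
# Toy illustration: hyperbolic discounting of a small-sooner vs. a
# large-later reward. The numbers are made up for the example.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of `amount` received after `delay` time units."""
    return amount / (1.0 + k * delay)

small_sooner = (10.0, 2.0)   # 10 units of reward available at t = 2
large_later = (50.0, 10.0)   # 50 units of reward available at t = 10

for now in (0.0, 1.9):       # evaluate far in advance, then just before t = 2
    v_small = hyperbolic_value(small_sooner[0], small_sooner[1] - now)
    v_large = hyperbolic_value(large_later[0], large_later[1] - now)
    pick = "small-sooner" if v_small > v_large else "large-later"
    print(f"at t={now}: small={v_small:.2f}, large={v_large:.2f} -> prefers {pick}")
```

Far from both rewards the agent prefers the larger-later option; just before the small reward arrives, the same discounting function flips the choice. That flip is the kind of bad self-equilibrium I have in mind, and nothing about it requires two conflicting subsystems.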
It would seem (to me) less misleading to make the case that self-trust is a very general problem for rational agents, e.g. by sketching the Löbian obstacle, although I know you said you’re not super familiar with that stuff. But the general point is that using some epistemics or decision theory doesn’t imply endorsing it reflectively, similar to Gödel’s point about the limits of logic. So “by default” you expect some disconnect; it doesn’t actually require a dual-process theory where two different systems conflict. What a system reflectively endorses is already formally distinct from what it does.
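For reference, here is a loose statement of the formal result behind the Löbian obstacle (details glossed over; this is just to show the shape of the problem):

```latex
% Löb's theorem, stated loosely, for a consistent theory T extending
% Peano Arithmetic, with \Box_T its provability predicate:
\text{If } T \vdash \bigl(\Box_T\,\varphi \rightarrow \varphi\bigr),
\text{ then } T \vdash \varphi.
% Consequence: T cannot prove the blanket self-trust schema
% "whatever I prove is true" for any \varphi it does not already prove;
% that is the sense in which an agent reasoning by proof cannot establish
% full trust in its own (or a successor's) conclusions.
```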
Okay, I’ll try to look into it again. Thanks for the suggestion.
Yeah, that makes sense; I think that chapter accomplishes its purpose, then.