Adding to the metaphor here: suppose every day I, a Bayesian, am deciding what to do. I have some prior on what to do, which I update based on info I hear from a couple of sources, including my friend and the blogosphere. It seems that I should have some uncertainty over how reliable these sources are, such that if my friend keeps giving advice that in hindsight looks better than the advice I’m getting from the blogosphere, I update to thinking that my friend is more reliable than the blogosphere, and in future update more on my friend’s advice than on the blogosphere’s.
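To make that concrete, here's a minimal sketch of the reliability-updating story, assuming a toy Beta-Bernoulli model (my choice of model, not anything from the post): each source's 'reliability' is the probability that its advice looks good in hindsight, and I keep a Beta posterior over that probability for each source.

```python
# Toy sketch: Beta(1, 1) priors over "this source's advice turns out well
# in hindsight" for each of my two advice sources.
friend_good, friend_bad = 1, 1
blog_good, blog_bad = 1, 1

# Suppose that, in hindsight, my friend's advice keeps panning out and the
# blogosphere's mostly doesn't.
for _ in range(5):
    friend_good += 1   # friend's advice looked good in hindsight
    blog_bad += 1      # blogosphere's advice looked bad in hindsight

# Posterior means: how much I now trust each source's next piece of advice.
print(friend_good / (friend_good + friend_bad))  # 6/7 ≈ 0.86
print(blog_good / (blog_good + blog_bad))        # 1/7 ≈ 0.14
```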
This means that if we take this sort of Bayesian theory of willpower seriously, it seems like you’re going to have ‘more willpower’ if in the past the stuff that your willpower advised you to do seemed ‘good’. That sounds like the standard theory of “if being diligent pays off you’ll be more diligent”, but it isn’t: if your ‘willpower/explicit reasoning module’ says that X is a good idea and Y is a terrible idea, but other evidence comes in saying that Y will be great, such that you end up doing Y anyway, and it sucks, then you should have more willpower in the future, because the willpower module’s prediction was vindicated and its estimated reliability should go up even though you didn’t act on it. I guess the way this ends up not being what the Bayesian framework predicts is if the evidence is really evidence for the proposition “I will end up taking so-and-so action”, but that’s loopy enough that I at most want to call it quasi-Bayesian. Or I guess you could have an uninformative prior over evidence reliability, such that you don’t think past performance predicts future performance.
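Here's the willpower case in the same toy terms (again just a sketch under my assumed Beta-Bernoulli model): the key point is that the willpower module's track record gets scored on whether its recommendation looked right in hindsight, not on whether I actually followed it.

```python
# Beta(1, 1) priors over "this source's advice turns out well in hindsight".
willpower_good, willpower_bad = 1, 1   # explicit reasoning: "do X, Y is terrible"
urges_good, urges_bad = 1, 1           # other evidence: "Y will be great"

# I end up doing Y anyway, and it sucks: willpower's warning against Y was
# vindicated, and the pro-Y evidence was not.
willpower_good += 1
urges_bad += 1

print(willpower_good / (willpower_good + willpower_bad))  # 2/3, up from 1/2
print(urges_good / (urges_good + urges_bad))              # 1/3, down from 1/2
# So on this story I should weight explicit reasoning more next time, i.e.
# have 'more willpower', even though I didn't act on it this time.
```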