I have begun wondering whether claiming to be a victim of “akrasia” might just be a way of admitting that your real preferences, as revealed in your actions, don’t match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).
This is an insufficient explanation. On many occasions I have found myself doing superficially enjoyable, instant-gratification, low-effort activities that I actually enjoyed less than some other delayed-gratification and/or higher-effort activity.
But even aside from that, all that’s doing is renaming the problem. “How to fight akrasia” becomes “how to align actual preferences and believed preferences” and you’re no closer to a solution.
On the contrary, you’re one step closer, in that you can now begin asking what your actual preferences are, and how to get them all met.
(Note that your “believed” preferences are also preferences; they just don’t necessarily have the same practical weight as whatever other preferences are interfering with them. The issue isn’t real vs. believed; it’s preference A vs. preference B, and resolving any perceived conflicts by thinking through and discarding the cached thoughts built up around them.)
This observation doesn’t seem to undermine the “wrong about what we want” view.
Suppose that your decisions are (imperfectly) optimized for A but you believe that you want B, and hence consciously optimize for B.
When considering a complex procedure which would get you a bunch of A next week, you reason “I want B, so why would I do something that gets me a bunch of A?” and don’t do it. You would only pursue such a complex procedure if you believed that you wanted A.
By contrast, given a simple way to get A, you could do it without believing that you want to do it. So you do (after all, your decisions are optimized for A), but then believe that you have done something other than what you wanted to do.
Under these conditions it would be possible to get more of both A and B, by pursuing the efficient-but-delayed path to getting A and not pursuing the inefficient-but-immediate path. But in order to do that you would have to believe that you ought to.
That is to say, the question need not be “how to align actual preferences and believed preferences,” it could be “how do we organize a mutually beneficial trade?”
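To make the “trade” concrete, here is a toy sketch with invented payoffs (the effort costs and yields are assumptions for illustration, not anything claimed in the argument above): if the conscious planner endorses the efficient-but-delayed route to A, the A-optimizer no longer needs the wasteful immediate route, and both A and B come out ahead.

```python
# Toy model of the "mutually beneficial trade" between the A-optimizing
# decision process and the consciously endorsed pursuit of B.
# All numbers are invented for illustration; nothing here is empirical.

EFFORT_PER_DAY = 10

def status_quo():
    # Belief "I want B" blocks the efficient-but-delayed A plan,
    # but the A-optimizer grabs A through a cheap, inefficient route anyway.
    impulsive_cost, impulsive_a = 6, 4
    b = EFFORT_PER_DAY - impulsive_cost      # leftover effort goes to B
    return impulsive_a, b                    # (A = 4, B = 4)

def with_trade():
    # Consciously endorse the efficient-but-delayed A plan, so the
    # A-optimizer has no need for the wasteful immediate route.
    planned_cost, planned_a = 3, 6
    b = EFFORT_PER_DAY - planned_cost
    return planned_a, b                      # (A = 6, B = 7)

print("status quo:", status_quo())   # more effort wasted, less of both
print("with trade:", with_trade())   # strictly more A and more B
```

The point is only structural: once the impulsive route is crowded out by a route the A-optimizer actually gets satisfied through, the freed-up effort is available for B.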
Of course there are other problems—for example, we aren’t very well optimized for A, and in particular aren’t great at looking far into the future. This seems very important, but I think that rationalists tend to significantly underestimate how well optimized we are for A (in part because we take at face value our beliefs about what we want, and observe that we are very poorly optimized for getting that).
That’s Bryan Caplan’s view. Seems quite plausible to me.
No.