This observation doesn’t seem to undermine the “wrong about what we want” view.
Suppose that your decisions are (imperfectly) optimized for A but you believe that you want B, and hence consciously optimize for B.
When considering a complex procedure that would get you a bunch of A next week, you reason, “I want B, so why would I do something that gets me a bunch of A?” and don’t do it. You would only pursue such a complex procedure if you believed that you wanted A.
By contrast, given a simple, immediate way to get A, you could take it without believing that you want to. So you do (after all, your decisions are optimized for A), but then believe that you have done something other than what you wanted to do.
Under these conditions it would be possible to get more of both A and B, by pursuing the efficient-but-delayed path to A (the complex procedure) and skipping the inefficient-but-immediate one (the simple grab). But in order to do that, you would have to believe that you ought to.
That is to say, the question need not be “how do we align actual preferences with believed preferences?” It could instead be “how do we organize a mutually beneficial trade?”
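To make the “more of both A and B” claim concrete, here is a minimal toy sketch in Python (every number and option name below is invented for illustration): the trade is worth making exactly when the planned path is a Pareto improvement for both the part of you whose decisions track A and the part that believes it wants B.

```python
# Toy model of the "trade": hypothetical payoffs (amount of A, amount of B)
# over the next week under two policies. All numbers are made up.
payoffs = {
    # Default: take the simple, immediate grab at A and otherwise pursue B.
    "impulsive grab, then pursue B": (3, 4),
    # Proposed trade: commit to the complex-but-efficient delayed path to A,
    # and skip the grabs that interfere with B in the meantime.
    "planned delayed path to A":     (5, 6),
}

baseline = payoffs["impulsive grab, then pursue B"]
deal = payoffs["planned delayed path to A"]

# The trade is mutually beneficial exactly when it is a Pareto improvement:
# more A for the part of you whose decisions track A, and more B for the
# part of you that believes it wants B.
is_pareto_improvement = deal[0] > baseline[0] and deal[1] > baseline[1]
print(f"A: {baseline[0]} -> {deal[0]}, B: {baseline[1]} -> {deal[1]}, "
      f"mutually beneficial: {is_pareto_improvement}")
```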
Of course there are other problems: for example, we aren’t very well optimized for A, and in particular we aren’t great at looking far into the future. This seems very important, but I think rationalists tend to significantly underestimate how well optimized we are for A (in part because we take our beliefs about what we want at face value, and observe that we are very poorly optimized for getting what we believe we want).