I am afraid that the perverse incentives would be harmful here. The easy way to achieve perfect accuracy in predicting your own future action is to predict failure, and then fail intentionally.
Even if one does not consciously go so far, it could still be unconsciously tempting to predict a slightly smaller probability of success, because you can always adjust the outcome downwards to match.
To avoid this effect completely, you (as a hypothetical utility maximizer) would have to care about your success infinitely more than about predicting correctly. In which case, why bother predicting?
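The incentive can be made concrete with a toy model (my own construction, not anything from the comment above): suppose an agent with true ability a can "adjust the outcome downwards" to any success chance q ≤ a, reports a probability p scored by the proper Brier rule, and values success with some weight W. Because the Brier rule is proper, the honest report p = q is optimal, so the agent effectively chooses q to maximize W·q − q(1−q).

```python
def best_effort(W, a, steps=1000):
    """Return the success chance q in [0, a] maximizing W*q - q*(1-q),
    i.e. success utility (weight W) minus expected Brier penalty when
    the agent honestly reports p = q."""
    candidates = [a * i / steps for i in range(steps + 1)]
    return max(candidates, key=lambda q: W * q - q * (1 - q))

# When the accuracy stake dominates (small W), sandbagging wins:
print(best_effort(W=0.1, a=0.7))  # -> 0.0: predict failure, then fail on purpose
# Once W is large enough, full effort wins:
print(best_effort(W=0.5, a=0.7))  # -> 0.7: try as hard as you can
```

Under these toy assumptions the objective is convex in q, so the optimum is always an endpoint: either deliberate failure or full effort, with the switch happening once the success weight W exceeds a threshold set by the accuracy penalty. The numbers (W, a, the Brier rule) are illustrative choices, not anything implied by the original argument.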