Why bother predicting the counterfactual consequences of choosing A6 when you already “know” its EU is higher than that of A7 and all the other options?
Are you sure you’re not anthropomorphizing the decision procedure? If I actually run through the steps that it specifies in my head, I don’t see any place where it would say “why bother” or fail to do the prediction.
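To make that concrete, here is a minimal sketch of the loop as I read it; the names (choose_action, predict_outcome, utility) and the toy numbers are stand-ins I am making up for illustration, not anything taken from the UDT1 write-up:

```python
def choose_action(actions, predict_outcome, utility):
    # For every action, predict its counterfactual consequences and compute
    # expected utility; no branch lets an action skip the prediction step
    # just because we already "expect" it to win.
    expected_utility = {}
    for a in actions:
        outcome_distribution = predict_outcome(a)  # list of (outcome, probability) pairs
        expected_utility[a] = sum(p * utility(o) for o, p in outcome_distribution)
    return max(expected_utility, key=expected_utility.get)


# Toy usage with made-up numbers:
outcomes = {"A6": [("win", 0.9), ("lose", 0.1)],
            "A7": [("win", 0.5), ("lose", 0.5)]}
best = choose_action(["A6", "A7"], outcomes.get, {"win": 1.0, "lose": 0.0}.get)
```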
On the other hand, if you actually do see a decision process similar to your own choose A6, then you know that A6 really does have a higher EU than A7.
No, in UDT1 you don’t update on outside computations like that. You just recompute the EU.
In any case, you shouldn’t know wrong things at any point. The trick is to be able to consider what’s going on without assuming (knowing) that you result from an actual choice.
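Roughly, something like this (reusing the same made-up helpers as above; observed_choice is a hypothetical argument I am adding only to show that it goes unused):

```python
def recompute_after_observation(actions, predict_outcome, utility, observed_choice):
    # The observed output of the other computation is deliberately not used to
    # adjust anything here; the EU of each action is recomputed from the world
    # model exactly as before. It only matters insofar as the world model
    # itself already accounts for that computation's behavior.
    del observed_choice
    eu = {a: sum(p * utility(o) for o, p in predict_outcome(a)) for a in actions}
    return max(eu, key=eu.get)
```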
No, in UDT1 you don’t update on outside computations like that. You just recompute the EU.
This doesn’t seem right. You do update, in the sense that you’d prefer a strategy where observing a utility-maximizer choose X leads you to conclude that X is the highest-utility choice, that is, all subsequent actions are chosen as if that were so.
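If it helps, here is a sketch of what I mean, with made-up names (choose_policy, prior_eu_of_policy): the dependence on the observation lives in which policy gets selected, not in an explicit belief update.

```python
from itertools import product

def choose_policy(observations, actions, prior_eu_of_policy):
    # Enumerate every mapping from observation to action and keep the one with
    # the highest expected utility under the prior. If the prior says the
    # observed maximizer is reliable, the winning policy is the one whose later
    # actions treat its observed choice X as the best one.
    policies = [dict(zip(observations, assignment))
                for assignment in product(actions, repeat=len(observations))]
    return max(policies, key=prior_eu_of_policy)
```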