Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
So what do you do on counterfactual mugging, or Newcomb’s problem for that matter?
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person who Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
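The C1/C2 reasoning above can be sketched as a plain expected-value comparison, assuming the classic payoffs ($1M in the opaque box, $1K in the transparent one) and collapsing Omega's reliability into a single accuracy parameter; the payoff values and names here are illustrative assumptions, not anything stated in the comment.

```python
def newcomb_evs(p_omega_correct, opaque=1_000_000, transparent=1_000):
    """Expected value of each choice, given Omega's predictive accuracy."""
    # One-boxing: with probability p Omega predicted it, so the opaque box is full.
    ev_one_box = p_omega_correct * opaque
    # Two-boxing: with probability p Omega predicted it, so the opaque box is
    # empty; the transparent box's contents are yours either way.
    ev_two_box = (1 - p_omega_correct) * opaque + transparent
    return ev_one_box, ev_two_box

one, two = newcomb_evs(0.9)
print(one, two)  # one-boxing wins by a wide margin at 90% accuracy
```

Under these stakes, one-boxing has the higher expectation whenever the accuracy exceeds roughly 0.5005, which is why a low C1 (can't fool Omega) and high C2 (the predicted-one-boxer outcome pays more) point at one-boxing.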
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
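The "low enough" test above can likewise be sketched as a one-line threshold, assuming the classic stakes ($100 demanded now, $10K hypothetically on offer) and a single illustrative parameter for how much refusing is estimated to reduce Omega's chance of ever giving $10K to someone the commenter cares about; all names and values here are assumptions for illustration, not the commenter's actual procedure.

```python
def keep_the_money(prob_drop, cost=100, prize=10_000):
    """Refuse (keep the $100) when the expected loss from the reduced
    chance of a future $10K gift is smaller than the $100 demanded now."""
    return prob_drop * prize < cost

print(keep_the_money(0.0))   # no effect on Omega's future gifts -> keep it
print(keep_the_money(0.05))  # a 5% drop costs $500 in expectation -> pay up
```

On this toy model the break-even point is a 1% drop; below that, keeping the money is the better bet, matching the "if that's low enough, I keep my money" conclusion.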