To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
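If it helps, here’s a minimal sketch of the set-difference view I’m describing, in Python, with toy attribute sets I’ve made up purely for illustration:

```python
# Toy model: a world-state is just a set of attributes (plain strings here).
# The attributes below are made up purely for illustration.

def consequences(world_a, world_b):
    """Return the attributes unique to each world-state -- i.e. the
    consequences that can motivate a choice between the actions leading
    to them."""
    return world_a - world_b, world_b - world_a

# Hypothetical example: action A = donate $100, action B = keep it.
w_a = {"charity has $100 more", "I have $100 less", "the sun rises tomorrow"}
w_b = {"charity's balance is unchanged", "I keep my $100", "the sun rises tomorrow"}

unique_to_a, unique_to_b = consequences(w_a, w_b)
print(unique_to_a)  # consequences of A
print(unique_to_b)  # consequences of B
# "the sun rises tomorrow" is shared by both world-states, so it drops out.
```

The shared attribute drops out of the comparison, which is all I mean by saying it doesn’t matter for choosing between A and B.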
Ok, now you’re hiding the problem in the word “attribute” and, to a certain extent, “state of the world”; e.g., judging by your reaction to my previous posts, I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, à la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
Beats me. Why does that matter?
So what do you do on counterfactual mugging, or Newcomb’s problem for that matter?
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person who Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much rejecting the offer reduces Omega’s chances of giving $10K to anyone I care about. If that reduction is low enough (as in the classic formulation), I keep my money.
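For what it’s worth, the Newcomb half of that is just an expected-value comparison. Here’s a rough sketch, assuming the classic payoffs ($1M in the opaque box, $1K in the transparent box), treating C2 as essentially 1, and using a placeholder value for C1:

```python
# Rough expected-value sketch of the Newcomb-like reasoning above.
# C1 = my confidence that I can be predicted as a one-boxer while in fact
# two-boxing (i.e. that I can fool Omega). C2 is treated as ~1 here, since
# with the classic payoffs a predicted one-boxer clearly gets more money.

OPAQUE_BOX = 1_000_000  # classic payoff if Omega predicted one-boxing
CLEAR_BOX = 1_000       # classic payoff always in the transparent box

def expected_payout(two_box, c1):
    """Expected dollars, assuming Omega's prediction matches my actual
    choice except with probability c1."""
    if not two_box:
        # Predicted (almost certainly) as a one-boxer: the opaque box is full.
        return OPAQUE_BOX
    # Two-boxing: with probability c1 I fooled Omega and both boxes pay out;
    # otherwise the opaque box is empty and I only get the transparent one.
    return c1 * (OPAQUE_BOX + CLEAR_BOX) + (1 - c1) * CLEAR_BOX

c1 = 0.01  # placeholder: low confidence that I can fool Omega
print(expected_payout(two_box=False, c1=c1))  # 1,000,000
print(expected_payout(two_box=True, c1=c1))   # ~11,000 -> so I one-box
```

With C1 that low, one-boxing wins by a wide margin, which is all the “I one-box” conclusion above amounts to.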