So there’s no point in time where deciding “I should be the sort of person who pays out in a Counterfactual Mugging” has positive expected utility.
Sure, I agree.
What I’m suggesting is that “I should be the sort of person who does the thing that has positive expected utility” causes me to pay out in a Counterfactual Mugging, and causes me to not pay out in a Counterfactual Antimugging, without requiring any prophecy. And that as far as I know, this is representative of the locally bandied-around solutions to decision-theory problems.
Is this not true?
“I decided that I would steal all your money if you hit the S key on your keyboard between 10:00-11:00 am on a Sunday, and you just did,”
I agree that this is not something I can sensibly protect against. I’m not actually sure I would call it a decision theory problem at all.
In the inversion I suggested to the Counterfactual Mugging, your payout is determined by whether you would pay out in the Counterfactual Mugging. In the Counterfactual Mugging itself, Omega predicts whether you would pay out, and if you would, you get a 50% shot at a million dollars. In the inverted scenario, Omega predicts whether you would pay out in the Counterfactual Mugging scenario, and if you wouldn’t, you get the shot at a million dollars.
Being the sort of person who would pay out in a Counterfactual Mugging only brings positive expected utility if you expect the Counterfactual Mugging scenario to be more likely than the inverted Counterfactual Mugging scenario.
The inverted Counterfactual Mugging scenario, like the case where Omega rewards or punishes you based on your keyboard usage, isn’t exactly a decision theory problem: once it arises, you don’t get to make a decision. But it doesn’t need to be one.
When the question is “should I be the sort of person who pays out in a Counterfactual Mugging?”, if the chance of that disposition being helpful is balanced out by an equal chance of it being harmful, then it doesn’t matter whether the situations that balance it out require you to make decisions at all; all that matters is that the expected utilities balance.
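To make the balance concrete, here’s a minimal sketch in Python. It assumes the standard Counterfactual Mugging payoffs (a $1,000,000 prize on a fair coin flip, a $100 payment demanded on the other face) and an Antimugging whose prize is the same 50% shot at a million; the scenario probabilities are hypothetical stand-ins for your prior.

```python
# Expected utility of the "payer" vs the "non-payer" disposition, under
# assumed payoffs. Counterfactual Mugging: Omega flips a fair coin; heads,
# a payer gets $1,000,000 (a non-payer gets nothing); tails, a payer hands
# over $100. Antimugging (inverted): the 50% shot at the prize goes to
# whoever would NOT pay in the Mugging.

PRIZE = 1_000_000
COST = 100

def expected_utility(is_payer: bool, p_mugging: float, p_antimugging: float) -> float:
    # In the Mugging, only a payer faces the coin: 50% prize, 50% -cost.
    eu_mugging = 0.5 * PRIZE - 0.5 * COST if is_payer else 0.0
    # In the Antimugging, only a non-payer gets the 50% shot at the prize.
    eu_anti = 0.0 if is_payer else 0.5 * PRIZE
    return p_mugging * eu_mugging + p_antimugging * eu_anti

# Flat distribution: both scenarios equally likely.
print(expected_utility(True, 0.5, 0.5))   # payer
print(expected_utility(False, 0.5, 0.5))  # non-payer
```

Under the flat distribution, the non-payer actually comes out $25 ahead in expectation (the expected cost of the $100 payments); being a payer only wins once the Mugging is judged more likely than its inversion.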
If you take as a premise “Omega simply doesn’t do that sort of thing; it only poses decision theory dilemmas whose results depend on how you would respond in that particular dilemma,” then our probability distribution is no longer flat, and being the sort of person who pays out in a Counterfactual Mugging scenario becomes utility maximizing. But this isn’t a premise we can take for granted. Omega is already posited as an entity which can judge your decision algorithms perfectly, and which imposes dilemmas that are highly arbitrary.
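How far from flat does the distribution need to be? A quick sketch of the break-even point, assuming the standard $1,000,000 prize and $100 payment (the probabilities are hypothetical):

```python
# How much likelier must the Mugging be than its inversion before the
# "payer" disposition wins? Assumed payoffs: $1,000,000 prize on a fair
# coin, $100 payment demanded on the other face.
PRIZE, COST = 1_000_000, 100

# A payer's edge per Mugging:        0.5 * PRIZE - 0.5 * COST
# A non-payer's edge per Antimugging: 0.5 * PRIZE
# Paying wins iff p_mugging / p_antimugging exceeds their ratio:
break_even = (0.5 * PRIZE) / (0.5 * PRIZE - 0.5 * COST)
print(break_even)  # ~1.0001: almost any tilt toward the Mugging suffices
```

So the premise doesn’t have to make Antimuggings impossible, only even slightly less probable than Muggings; the point is that nothing in the setup licenses even that much tilt for free.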