I don’t believe that possible worlds can trade with each other, and I don’t see anything in Counterfactual Mugging to persuade me of that.
Expected-utility maximization is based on a model in which you inhabit a world state, and you have a set (possibly infinite) of possible future world states, with a probability (or a point on a probability distribution) attached to each one. If you have interactions between your possible future states, you’re just not representing them correctly. The most you can say is that you are using some different model. You can’t say there’s a problem with the standard model unless you demonstrate a situation your model can handle better.
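To make the model concrete, here is a minimal sketch of expected-utility maximization over a set of future world states; the state names, probabilities, and utilities are all illustrative assumptions, not anything from the discussion above.

```python
# Minimal sketch of the standard expected-utility model: one current world
# state, a set of possible future states, a probability attached to each.
# All names and numbers below are illustrative assumptions.
future_states = {
    "state_a": {"probability": 0.5, "utility": 10.0},
    "state_b": {"probability": 0.3, "utility": 4.0},
    "state_c": {"probability": 0.2, "utility": -2.0},
}

# Expected utility is the probability-weighted sum over future states.
# Note that nothing in this model represents interactions *between*
# future states -- which is the point being made above.
expected_utility = sum(s["probability"] * s["utility"]
                       for s in future_states.values())
print(expected_utility)  # 0.5*10 + 0.3*4 + 0.2*(-2) = 5.8
```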
To answer the counterfactual mugging: You keep your $100. Because the game is over. You can’t gain money in another branch by giving up the $100. This is not a Newcomb-like situation.
Please provide a counterargument if you vote this down.
Consider two alternative possible worlds, forking from a common worldline with equal 50% probability. In one world an agent A develops, and in the other an agent B. Agent A can either achieve U1 A-utilons or U2 B-utilons, with U2>U1 (if A chooses to get the U2 B-utilons, it produces 0 A-utilons). Agent B can either achieve U1 B-utilons or U2 A-utilons. If each of them thinks only about itself, the outcome is U1 for A and U1 for B, which is not very much. If instead each of them optimizes the other’s utility, both get U2. If this causes any trouble, shift the perspective to the point before the fork and calculate the expected utility of the two strategies: the first yields U1/2 in both A-utility and B-utility, while the second yields U2/2 of each, which is better.
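The expected-utility comparison above, spelled out with illustrative numbers (U1 = 1, U2 = 3 are my assumptions; the argument only requires U2 > U1):

```python
# Two equiprobable forks: world A (agent A develops) and world B (agent B).
# Each agent can produce U1 of its own utility or U2 of the other's.
U1, U2 = 1.0, 3.0   # illustrative values; the argument needs only U2 > U1
p = 0.5             # probability of each fork

# Strategy one: each agent optimizes its own utility.
selfish_A = p * U1 + p * 0.0   # A-utility: A makes U1 in world A, B makes none
selfish_B = p * 0.0 + p * U1   # B-utility: B makes U1 in world B, A makes none

# Strategy two: each agent optimizes the other's utility.
trade_A = p * 0.0 + p * U2     # A-utility: produced by B in world B
trade_B = p * U2 + p * 0.0     # B-utility: produced by A in world A

print(selfish_A, selfish_B)    # 0.5 0.5  -> U1/2 each
print(trade_A, trade_B)        # 1.5 1.5  -> U2/2 each, which is better
```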
It’s more efficient for them to produce utility for each other, which maps directly onto the concept of trade. Counterfactual mugging explores exactly the same conceptual problems that you run into in trying to accept the argument above. If you accept counterfactual mugging, you should accept the deal above as well. Of course, both agents must be capable of telling whether the other counterfactual agent is going to abide by the deal, which is the role Omega’s powers play in CM.
Strategy one has U1/2 in both A-utility and B-utility, with the additional property that the utility is produced in the fork where it can be used (i.e. it truly exists).
Strategy two has U2/2 in both A-utility and B-utility, but with the property that the utility produced is not usable in the fork where it is produced (i.e. the actual usable utility is really 0, unless the utility can be traded for the opposite utility, which is actually usable in the same fork).
Assuming that there is no possibility of trade (since you describe no method by which it is possible):
I don’t see a requirement for trade in the counterfactual mugging problem, so I accept it.
Since the above deal requires the possibility of trade to actually gain USABLE utility (arguably the only nonzero kind, assuming that [PersonalUse OR Trade = Usability]), and I don’t see the possibility of trade, I am justified in rejecting the above deal despite accepting the counterfactual deal.
Utility is not instrumental; it is not used for something else. Utility is the (abstract) thing you try to maximize, caring about nothing else. It’s the measure of success, all consequences taken into account (and is not itself “physical”). As such, it doesn’t matter in what way (or “where”) utility gets “produced”. Knowing that might be useful for the purpose of computing utility, but not for the purpose of interpreting the resulting amount, since utility is the final interpretation of the situation, the only one that matters.
Now, it might be that you consider events in the counterfactual worlds not valuable, but then your objection interrupts my argument a step earlier than where you placed it: it makes incorrect the statement that A’s actions can produce B-utility. It could be that A can’t produce B-utility, but it can’t be that A produces B-utility and yet it doesn’t matter for B.
Hence the second paragraph about counterfactual mugging: if you accept that events in the counterfactual world can confer value, then you should take this deal as well. And whether or not you accept CM, if you consider the problem in advance, you want to precommit to counterfactual trade. Hence it is reflectively consistent to accept counterfactual trade later as well.
Fair enough. I’m willing to rephrase my argument as: A can’t produce B-utility, because there is no B present in the world.
Yes, I do want to pre-commit to a counterfactual trade in the mugging, because that is the cost of obtaining access to an offer of high expected utility (see my real-world rephrasing here for a more intuitive example case).
In the current world-splitting case, I see no utility for me, since the opposing fork cannot produce it, so there is no point in my pre-committing.
Why do you believe that the counterfactual isn’t valuable? You wrote:
I’m willing to rephrase my argument as: A can’t produce B-utility, because there is no B present in the world.
That B is not present in a given possible world is not in itself a valid reason to morally ignore that possible world (there could be valid reasons, but B’s absence is not one of them for most preferences that are not specifically designed to make this condition hold, and for human-like morality in particular). For example, people clearly care about the (actual) world where they’ve died (and so are not present): you won’t trade a penny a day while you live for eternal torture to everyone after you die (while you should, if you don’t care about the world where you are not present).
We seem to have differing assumptions:
My default is to assume that B utility cannot be produced in a different world UNLESS it is of utility in B’s world to produce the utility in another world. One method by which this is possible is trade between the two worlds (which was the source of my initial response).
Your assumption seems to be that B utility will always have value in a different world.
My default assumption is explicitly overridden for the case where I feel good (have utility in the world where I am present) when I care about the world where I am not present.
Your (assumed) blanket assumption has the counterexample that while I feel good when someone has sex with me in the world where I am present (alive), I do not feel good (I feel nothing—and am currently repulsed by the thought = NEGATIVE utility) when someone has sex with me in the world where I am dead (not present).
ACK. Wait a minute. I’m clearly confusing the action that produced B utility with B utility itself. Your problem formulation did explicitly include your assumption (which thereby makes it a premise).
OK. I think I now accept your argument so far. I have a vague feeling that you’ve carried the argument to places where the premise/assumption isn’t valid but that’s obviously the subject for another post.
(Interesting karma question. I’ve made a mistake. How interesting is that mistake to the community? In this case, I think that it was a non-obvious mistake (certainly for me without working it through ;-) that others have a reasonable probability of making on an interesting subject so it should be of interest. We’ll see whether the karma results validate my understanding.)
(Just to be sure, I expect this is exactly the point you’ve changed your mind about, so there is no need for me to argue.)
My default is to assume that B utility cannot be produced in a different world UNLESS it is of utility in B’s world to produce the utility in another world.
Does not compute. Utility can’t be “in a given world” or “useful” or “useful from a given world”. Utility is a measure of stuff, not stuff itself. Measure has no location.
Your assumption seems to be that B utility will always have value in a different world.
Not if we interpret “utility” as meaning “valuable stuff”. It’s not generally correct that the same stuff is equally valuable in all possible worlds. If in the worlds of both agents A and B we can produce stuff X and Y, it might well be that producing X in world A has more B-utility than producing Y in world A, while producing X in world B has less B-utility than producing Y in world B. At the same time, a given amount of B-utility is equally valuable no matter where the stuff so measured got produced.
Yes. I agree fully with the above post.
But measure can certainly be location dependent. Measure doesn’t have to be translation invariant. Hyperbolic discounting, for instance.
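A toy illustration of the X/Y point above, with assumed numbers: the same stuff carries different B-utility depending on the world it is produced in, yet a B-utilon is worth the same wherever it is earned.

```python
# Assumed B-utility of producing each kind of stuff in each world.
b_utility = {
    ("X", "world_A"): 5.0,
    ("Y", "world_A"): 2.0,
    ("X", "world_B"): 1.0,
    ("Y", "world_B"): 4.0,
}
# In world A, producing X beats producing Y (5 > 2); in world B it is
# reversed (1 < 4). Stuff is valued differently across worlds, but one
# B-utilon from any cell counts the same as one from any other.
```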
You’re presenting a standard PD, only distributed across possible worlds. There doesn’t seem to be any difference between splitting into 2 possible worlds and taking 2 prisoners into 2 different cells. So you would need to provide a solution, a mechanism for cooperation, that would also work for the PD. And you haven’t.
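Here is that PD mapping made explicit, reusing the illustrative U1 = 1, U2 = 3 from earlier; “cooperate” means producing the other agent’s utility, and all payoffs are expected values from before the fork.

```python
# Mapping the fork game onto a one-shot Prisoner's Dilemma.
# Illustrative values, as before: the argument needs only U2 > U1 > 0.
U1, U2 = 1.0, 3.0

# (A's move, B's move) -> (expected A-utility, expected B-utility),
# computed from the pre-fork perspective with 50% per fork.
payoffs = {
    ("defect", "defect"):       (U1 / 2, U1 / 2),       # (0.5, 0.5)
    ("cooperate", "cooperate"): (U2 / 2, U2 / 2),       # (1.5, 1.5)
    ("cooperate", "defect"):    (0.0, (U1 + U2) / 2),   # (0.0, 2.0)
    ("defect", "cooperate"):    ((U1 + U2) / 2, 0.0),   # (2.0, 0.0)
}
# Defecting dominates: whichever move the other makes, defecting gains
# you an extra U1/2 -- yet mutual cooperation beats mutual defection.
# That is exactly the standard PD structure.
```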
I don’t know what you mean by “accept counterfactual mugging”, especially since I just said I don’t agree with your interpretation of it. I believe the counterfactual mugging is also just a rephrasing of the PD. You should keep the $100 unless you would cooperate in a one-shot PD. We all know that rational agents would do better by cooperating, but that doesn’t make it happen.
That was the answer to the original edition of your question, which asked what counterfactual mugging has to do with the argument for trade between possible worlds. I presented more or less a direct reduction in the comment above.