The point of the comment you replied to was that “simply do not care what happens in mathematically possible structures other than the world I am actually in” may be true of SforSingularity, but consideration of Counterfactual Mugging shows that it shouldn’t be elevated to a general moral principle, and in fact he would prefer his own future self not to follow it. To make that point, I only need a version of CM with a physical coin.
The version of CM with a mathematical coin is trickier. But I think under UDT, since you don’t update on what Omega tells you about the coin result, you continue to think that both outcomes are possible. You only think something is mathematically impossible if you come to that conclusion through your own internal computations.
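To make the physical-coin case concrete, here is a minimal sketch of the expected-value comparison (assuming the usual 50/50 prior over the coin, and the $10000/$100 stakes used later in this thread; the function names are mine):

```python
# Minimal sketch: score a fixed policy against both coin outcomes,
# since under UDT you don't update on Omega's announcement.
PRIOR = {0: 0.5, 1: 0.5}  # both outcomes still considered possible

def expected_value(gives_when_asked: bool) -> float:
    total = 0.0
    for bit, p in PRIOR.items():
        if bit == 1:
            # Omega rewards you iff it predicts you'd give when asked.
            total += p * (10000 if gives_when_asked else 0)
        else:
            # Omega asks for the $100.
            total += p * (-100 if gives_when_asked else 0)
    return total

print(expected_value(True))   # 4950.0 -- the giving policy wins
print(expected_value(False))  # 0.0
```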
You don’t “update” on your own mathematical computations either.
The data you construct or collect is about what you are, and by extension about what your actions are and thus what their effect is; it is not about what is possible in the abstract (more precisely: about what you could think possible in other situations). That’s the trick with mathematical uncertainty: since you can plan for situations that turn out to be impossible, you need to take that planning into account in other situations. You do this by factoring the impossible situations into your decision-making: accounting for your own planning for those situations in the situations where you don’t yet know them to be impossible.
I don’t get this either, sorry. Can you give an example where “you don’t ‘update’ on your own mathematical computations either” makes sense?
Here’s how I see CM-with-a-math-coin going, in more detail. I think we should ask: supposing you think Omega may come in a moment to CM you using the n-th bit of pi, what would you prefer your future self to do, given that you can compute the n-th bit of pi either now or later? If you can compute it now, clearly you’d prefer your future self not to give $100 to Omega if the bit is 0.
What if you can’t compute it now, but can compute it later? In that case, you’d prefer your future self not to give $100 to Omega if it computes that the bit is 0. For suppose the bit is 1: then Omega will simulate/predict your future self, the simulated self will compute that the bit is 1 and give $100 to Omega, and Omega will reward you. And if the bit is 0, Omega will not get $100 from you.
Since by “updating” on your own computation you win both ways, I don’t see why you shouldn’t do it.
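Here is the same sketch under this reading, where Omega’s simulation is of your actual future self, which computes the true bit before deciding (payoffs as above; this encoding of Omega is my own assumption):

```python
def strategy(computed_bit: int) -> bool:
    # Give $100 only if your own computation says the bit is 1.
    return computed_bit == 1

def payoff(true_bit: int) -> int:
    if true_bit == 1:
        # Omega simulates your actual future self; that simulation
        # computes the true bit (1), so it gives, and Omega rewards you.
        return 10000 if strategy(true_bit) else 0
    # The bit is 0: Omega asks for $100, and this strategy declines.
    return -100 if strategy(true_bit) else 0

for bit in (0, 1):
    print(f"bit={bit}: payoff={payoff(bit)}")
# bit=0: payoff=0
# bit=1: payoff=10000  -- "you win both ways"
```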
(I converted this comment to a top-level post. See Counterfactual Mugging and Logical Uncertainty. A little bit is left here in the original notation, as a reply.)
I assume you recast the problem this way: if the n-th bit of pi is 1, then Omega may give you $10000, and if the bit is 0, then Omega asks you for the $100.
If the bit is 1, Omega’s simulation of you can’t conclude that the bit is 0, because the bit is 1. Omega doesn’t compute what you’ll predict in reality; it computes what you would do if the bit were 0 (which, in this case, it isn’t). And since, as you suggested, you decline to give away the $100 if the bit is 0, Omega’s simulation of the counterfactual will say that you wouldn’t oblige, and you won’t get the $10000.
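To make the disagreement explicit, here is the same sketch with Omega modeled as in this reply: it feeds the counterfactual bit (0) into its simulation, rather than letting the simulation compute the true bit (payoffs and names as in the sketches above, which are my own assumptions):

```python
def strategy(computed_bit: int) -> bool:
    # Same strategy: give $100 only if the computed bit is 1.
    return computed_bit == 1

def payoff(true_bit: int) -> int:
    if true_bit == 1:
        # Omega simulates the counterfactual where the bit is 0;
        # this strategy declines there, so Omega pays nothing.
        return 10000 if strategy(0) else 0
    return -100 if strategy(true_bit) else 0

for bit in (0, 1):
    print(f"bit={bit}: payoff={payoff(bit)}")
# bit=0: payoff=0
# bit=1: payoff=0  -- updating on your computation loses the $10000
```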