In UDT, no Bayesian updating occurs, and in particular, you don’t update on the fact that you exist
To be honest, I am suspicious of both UDT and Modal Realism. I find that I simply do not care what happens in mathematically possible structures other than the world I am actually in; in a sense this validates Wei’s claim that
Updateless Decision Theory converts anthropic reasoning problems into ethical problems
since I do not care what happens in mathematically possible structures that are incompatible with what I have already observed about the real world around me, I may as well have updated on my own existence in this world anyway.
In the case where either theory T1 or T2 is true, I care about whichever world is actually real, so my intuition is that we should pay the $1, which leads me to believe that I implicitly reject SIA.
I find that I simply do not care what happens in mathematically possible structures other than the world I am actually in
I think this is where Counterfactual Mugging comes in. If you expect to encounter CM-like situations in the future, then you’d want your future self to care what happens in mathematically possible structures other than the world it is “actually in”, since that makes the current you better off.
UDT might be too alien, so that you can’t make yourself use it even if you want to (so your future self won’t give $100 to Omega no matter what the current you wants), but AI seems to be a good application for it.
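To put a number on “makes the current you better off”, here is a minimal sketch of the ex-ante expected values in an ordinary Counterfactual Mugging with a fair physical coin. The $10000 reward figure, and the assumption that Omega pays it iff it predicts you would hand over the $100 on the losing flip, are fill-ins for the standard setup rather than anything stated in this thread.

```python
# Toy expected-value calculation for Counterfactual Mugging with a fair
# physical coin. Assumed payoffs (not given in this thread): Omega pays
# $10000 on the winning flip iff it predicts you would hand over $100 on
# the losing flip.

def expected_value(pays_when_asked):
    """Ex-ante expected payoff of committing to a policy before the flip."""
    winning = 10000 if pays_when_asked else 0   # reward requires being a predicted payer
    losing = -100 if pays_when_asked else 0     # you only lose $100 if you actually pay
    return 0.5 * winning + 0.5 * losing

print(expected_value(True))   # 4950.0 -- the self that cares about the counterfactual branch
print(expected_value(False))  # 0.0    -- the self that only cares about the world it is "actually in"
```

On these assumed numbers, a future self that pays when asked is worth $4950 to the current you in expectation, while one that refuses is worth $0.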
in mathematically possible structures other than the world it is “actually in”
The Counterfactual Mugging previously discussed involved probabilities over mathematical facts, e.g. the value of the n-th binary digit of pi. If that digit does turn out to be a 0, the counterfactual mugging pays off only in a mathematically impossible structure.
Which seems to be no more impossible than normal counterfactuals, just more explicitly so.
The point of the comment you replied to was that “simply do not care what happens in mathematically possible structures other than the world I am actually in” may be true of SforSingularity, but consideration of Counterfactual Mugging shows that it shouldn’t be elevated to a general moral principle, and in fact he would prefer his own future self not to follow it. To make that point, I only need a version of CM with a physical coin.
The version of CM with a mathematical coin is trickier. But I think under UDT, since you don’t update on what Omega tells you about the coin result, you continue to think that both outcomes are possible. You only think something is mathematically impossible if you come to that conclusion through your internal computations.
You don’t “update” on your own mathematical computations either.
The data you construct or collect is about what you are, and by extension what your actions are and thus what their effect is, not about what is possible in the abstract (more precisely: what you could think possible in other situations). That’s the trick with mathematical uncertainty: since you can plan for situations that turn out to be impossible, you need to take that planning into account in other situations. This is what you do by factoring the impossible situations into the decision-making: accounting for your own planning for those situations, in situations where you don’t know them to be impossible.
I don’t get this either, sorry. Can you give an example where “You don’t “update” on your own mathematical computations either” makes sense?
Here’s how I see CM-with-math-coin going, in more detail. I think we should ask the question: supposing you think that Omega may come in a moment to CM you using the n-th bit of pi, what would you prefer your future self to do, assuming that you can compute the n-th bit of pi either now or later? If you can compute it now, clearly you’d prefer your future self not to give $100 to Omega if the bit is 0.
What if you can’t compute it now, but you can compute it later? In that case, you’d prefer your future self not to give $100 to Omega if it computes that the bit is 0. Because suppose the bit is 1: then Omega will simulate/predict your future self, the simulated self will compute that the bit is 1 and give $100 to Omega, and so Omega will reward you. And if the bit is 0, then Omega will not get $100 from you.
Since by “updating” on your own computation you win both ways, I don’t see why you shouldn’t do it.
(I converted this comment to a top-level post. See Counterfactual Mugging and Logical Uncertainty. A little bit is left here in the original notation, as a reply.)
I assume you recast the problem this way: if the n-th bit of pi is 1, then Omega may give you $10000, and if the bit is 0, then Omega asks for the $100.
If the bit is 1, Omega’s simulation of you can’t conclude that the bit is 0, because the bit is 1. Omega doesn’t compute what you’ll predict in reality; it computes what you would do if the bit were 0 (which, in this case, it isn’t in reality). And as you suggested, you decline to give away the $100 if the bit is 0, thus Omega’s simulation of the counterfactual will say that you wouldn’t oblige, and you won’t get the $10000.
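As a rough sketch of the distinction being drawn here (one reading of this reply, reusing the assumed $10000/$100 figures from the earlier sketch): Omega’s counterfactual simulation stipulates that the bit is 0 whatever its real value, so a strategy of “compute the bit and refuse if it is 0” refuses inside that simulation and forfeits the reward even when the real bit is 1.

```python
# Sketch, under the assumption that Omega rewards the real bit-is-1 world iff
# the agent pays in a simulation where the bit is stipulated to be 0.

def always_pay(simulated_bit):
    return True                       # hands over the $100 regardless

def compute_and_refuse(simulated_bit):
    return simulated_bit != 0         # refuses whenever its computation says 0

def real_payoff(real_bit, strategy):
    if real_bit == 1:
        # Omega consults the counterfactual: your behaviour with the bit fixed
        # to 0, not with the real bit.
        return 10000 if strategy(0) else 0
    # real_bit == 0: Omega actually asks for the $100 and you follow your strategy.
    return -100 if strategy(real_bit) else 0

for strategy in (always_pay, compute_and_refuse):
    print(strategy.__name__, real_payoff(1, strategy), real_payoff(0, strategy))
# always_pay 10000 -100
# compute_and_refuse 0 0
```

On these assumed numbers the refusing strategy never collects anything, which matches the “you won’t get the $10000” conclusion above.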
Of course, from a physical point of view (e.g. from the point of view of Many Worlds QM or the lower Tegmark levels), there are lots of human instances around in the multiverse, all thinking that their particular bit of the multiverse is “real”. Clearly, they cannot all be right. This is somewhat worrying; naive ideas about our little part of the universe being real, and the rest imaginary, are probably a “confusion”. So we end up (as Wei D says) having to turn our old-fashioned epistemological intuitions into ethical principles, such as “I only care about the world that I am actually in”, or we have to leave ourselves open to turning into madmen who do bizarre things to themselves for expected reward in other possible universes.
And formalizing “the universe I am actually in” may not be easy; unless we are omniscient, we cannot have enough data to pin down where exactly in the multiverse we are.