I’m not sure. The simplest way for more copies of me to exist is for the universe to be larger, which doesn’t imply any crazy actions, except possibly betting that the universe is large/infinite. That isn’t a huge bullet to bite. From there you could probably get even more weight if you thought that copies of you were more densely distributed, or something like that, but I’m not sure what actions that would imply.
Speculation: The hypothesis that future civilisations spend all their resources simulating copies of you gets a large update. However, contrast it with the hypothesis that they simulate all possible humans: if your prior probability that they would simulate you in particular is inversely proportional to the number of possible humans (by some principle of indifference), the update just cancels that prior penalty, and the copies-of-you hypothesis ends up outweighed by the fact that it seems more interesting to simulate all humans than to simulate one of them over and over again.
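Roughly, with labels of my own choosing (M simulations run in total, N possible humans, q and p for how plausible the two motives seem a priori): the only-you hypothesis has prior about q/N and contains M copies of you, while the everyone hypothesis has prior p and contains about M/N copies of you, so an SIA-style weighting by copies gives

\[
\frac{(q/N)\cdot M}{p\cdot(M/N)} \;=\; \frac{q}{p},
\]

i.e. the factor-of-N update cancels the factor-of-1/N indifference prior, and you are left comparing how plausible the two motives are in the first place.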
Do you have any ideas for weird hypotheses that imply some specific actions?
Posit that physics allows a perpetuum mobile, so unbounded value is on the table and the infinities make the expected-value calculation break down and cry, as is common. If we by fiat disregard unbounded hypotheses: also posit a Doomsday clock that runs longer than the Towers of Hanoi, as specified by when some Turing machine halts. This breaks the calculation unless your complexity penalty assigner is uncomputable, even unspecifiable by possible laws of physics.
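To spell out why I think it breaks (a rough sketch on my reading, using a Solomonoff-style complexity penalty): the clock “doom strikes when the n-state machine T_n halts” takes only on the order of n log n bits to specify, but the longest-running n-state machines run for the busy-beaver time BB(n), which outgrows every computable function. So whatever scales with the clock’s running time (copies, observer-moments, value at stake) contributes terms like

\[
2^{-c\,n\log n}\,\mathrm{BB}(n)\;\longrightarrow\;\infty,
\]

and swapping in any other computable penalty loses to BB(n) in just the same way.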
Sure, there are lots of ways to break calculations. That’s true for any theory that’s trying to calculate expected value, though, so I can’t see how it’s particularly relevant for anthropics, unless we have reason to believe that any of these situations should warrant some special action. With anthropic decision theory you’re not even updating your probabilities based on the number of copies, so it really is only calculating expected value.
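For concreteness, the kind of bookkeeping I mean looks like this (a toy sketch; the “large vs small universe” numbers and names are made up by me, not taken from anywhere):

```python
# Toy sketch: priors are never updated on copy counts; the number of copies
# only multiplies the value an action produces under each hypothesis.

def expected_value(action, hypotheses):
    """hypotheses: iterable of (prior, n_copies, payoff), where payoff(action)
    is the value produced per copy if that hypothesis is true."""
    return sum(prior * n_copies * payoff(action)
               for prior, n_copies, payoff in hypotheses)

# Betting that the universe is large pays each copy +1 if it is,
# and costs each copy 1 if it isn't.
hypotheses = [
    (0.5, 10**6, lambda a: 1 if a == "bet" else 0),   # large universe, many copies
    (0.5, 1,     lambda a: -1 if a == "bet" else 0),  # small universe, one copy
]

print(expected_value("bet", hypotheses))      # ~500000: the many-copy world dominates
print(expected_value("refrain", hypotheses))  # 0
```

Which is also why the “bet that the universe is large/infinite” action from before falls out of it: the many-copy hypothesis dominates the sum without its probability ever being touched.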
It’s not true if potential value is bounded, which makes me sceptical that we should include a potentially unbounded term in how we weight hypotheses when we pick actions.
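Spelled out, with U_max as my label for the bound: if |U(h)| ≤ U_max for every hypothesis h, then

\[
\Bigl|\sum_h P(h)\,U(h)\Bigr| \;\le\; \sum_h P(h)\,U_{\max} \;=\; U_{\max},
\]

so no single hypothesis, however weird, can blow the calculation up.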