Multiple comments in one trenchcoat:
I am fine with your iterated mugging? “Forever” without discount implies infinities, which is not nice, but we can assume that I’ll live 50 more years, so I either need to pay 100*365*50 = $1,825,000 with delay or get $18,250,000 with delay, which sounds like a pretty good bet? The only issue I see here is the need to suspend prior disbelief in such bets, and reasonable doubt about the honesty of a person who doesn’t warn in advance about future payments.
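A back-of-the-envelope check of that arithmetic, using only the numbers above and assuming no discounting (a minimal sketch, not anyone’s actual model):

```python
# Rough arithmetic for the 50-year, no-discounting version of the bet.
days = 365 * 50             # remaining lifespan in days
pay_total = 100 * days      # paying $100/day the whole time
receive_total = 18_250_000  # the delayed payout quoted above

print(pay_total)                  # 1825000
print(receive_total / pay_total)  # 10.0 -- the payout is 10x the total cost
```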
***
Note that betting is a sort of counterfactual mugging in itself: you agree to pay in case of one outcome, conditional on being paid in case of the other outcome. If the resulting outcome is not a win and there is no causal enforcement, then your decision to pay or not to pay is similar to the decision to pay or not to pay in counterfactual mugging.
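A toy expected-value version of that analogy, with made-up stakes (the only point is that the ex-post decision to pay looks just like counterfactual mugging):

```python
# Made-up numbers: a bet you accept at the policy level, then lose.
p_win, you_get, you_pay = 0.5, 150.0, 100.0

# Ex ante (policy level): agreeing to pay if you lose is worth it.
ev_of_accepting = p_win * you_get - (1 - p_win) * you_pay   # +25.0

# Ex post, having lost, with no causal enforcement: paying just costs you money,
# exactly like handing the money over in counterfactual mugging.
ev_of_paying_after_loss = -you_pay                           # -100.0

print(ev_of_accepting, ev_of_paying_after_loss)
```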
***
I think that the simplest way to ignore weird hypotheses is to invoke computing power.
Roughly: there are many worlds that require less computing power to consider and are also more probable than the lizard world, so you redirect computing power away from considering the lizard world and get more utility.
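A toy sketch of that budget argument, with probabilities, costs, and payoffs invented purely for illustration (nothing here is a real model of anything):

```python
# Greedily spend a fixed compute budget on hypotheses with the best
# expected utility per unit of compute; all numbers are made up.
hypotheses = [
    # (name, probability, compute_cost, utility_if_analyzed)
    ("mundane world A", 0.40, 1.0, 10.0),
    ("mundane world B", 0.30, 2.0, 10.0),
    ("nearby earthly problem", 0.29, 3.0, 10.0),
    ("lizard world", 1e-6, 50.0, 100.0),
]

budget, spent, expected_utility = 6.0, 0.0, 0.0
for name, p, cost, u in sorted(hypotheses, key=lambda h: h[1] * h[3] / h[2], reverse=True):
    if spent + cost <= budget:
        spent += cost
        expected_utility += p * u
        print(f"analyze: {name}")

# The lizard world never makes the cut: its expected value per unit of compute
# is tiny, so the budget is exhausted on more probable worlds first.
```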
Correspondingly, if the lizard world is sufficiently similar to our world and has similar restrictions on computing power, it will likely not consider our world, which reduces the incentive to analyze it further.
Conversely, if we imagine a world with vastly more computing power than we have, we likely won’t be able to analyze it, which incentivizes such a world not to try to influence our policies.
To finish, if we imagine a world where it is possible to actually build literal Solomonoff induction, capable of considering all computable universes in finite time, such a world is possibly weird enough to justify considering all possible weird hypotheses, because it probably contains multiple AIXIs with multiple different utility functions and different priors, running multiple computable copies of you in all sorts of weird circumstances, even if we don’t invoke any multiversal weirdness. (It probably sucks to be a finite computable structure in such worlds.)
In the end, the actual influence on our updateless decisions is limited to a very narrow set of worlds, like “different outcomes of real analogs of the Prisoner’s Dilemma with people on the same planet as you,” and we don’t need to worry about lizard worlds until we solve all our earthly problems and start to disassemble galaxies into computronium.