Your point 2 seems to be about anthropics, not risk aversion. If you replace “destroying the world” with “kicking a cute puppy”, I become indifferent between indexical and logical coins. If it’s “destroying the world painfully for all involved”, I also get closer to being indifferent. Likewise if it’s “destroying the world instantly and painlessly”, but there’s a 1% indexical chance that the world will go on anyway. The difference only seems to matter when you imagine all your copies disappearing.
And even in that case, I’m not completely sure that I prefer the indexical coin. The “correct” multiverse theory might be one that includes logically inconsistent universes anyway (“Tegmark level 5”), so indexical and logical uncertainty become more similar to each other. That’s kinda the approach I took when trying to solve Counterfactual Mugging with a logical coin.