[Question] Is there work looking at the implications of many worlds QM for existential risk?
I’m thinking of writing a full post on how to think about existential risk in a many worlds scenario. Maybe there are strategies for avoiding existential risk that only make sense if many worlds is true.
For example, if the odds of extinction are high, we could try increasing the variance in the types of mitigation strategies we pursue, so that a greater fraction of alternative branches lands on a winning strategy.
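To make that variance intuition concrete, here is a minimal toy simulation (my own illustration, not drawn from any existing work). It assumes exactly one of N candidate strategies actually works, and compares committing every branch to our single favourite strategy against using a quantum coin to split branches evenly across all N. The value of N, the “exactly one strategy works” setup, and all the names are hypothetical.

```python
import random

# Toy model (illustration only, not from any cited work): exactly one of N
# candidate mitigation strategies actually works, but we don't know which.
# Policy A: every branch commits to the single strategy we currently favour.
# Policy B: a quantum coin splits branches evenly across all N strategies,
#           so under many worlds each strategy is pursued in 1/N of branches.

N = 3
TRIALS = 100_000

fraction_surviving_a = []  # per-trial fraction of branches that survive under policy A
fraction_surviving_b = []  # same for policy B

for _ in range(TRIALS):
    working = random.randrange(N)  # which strategy turns out to be the one that works
    fraction_surviving_a.append(1.0 if working == 0 else 0.0)
    fraction_surviving_b.append(1.0 / N)  # 1/N of branches always picked the winner

def mean(xs):
    return sum(xs) / len(xs)

print("Policy A: mean surviving fraction =", mean(fraction_surviving_a))
print("Policy A: P(no branch survives)  =",
      sum(1 for x in fraction_surviving_a if x == 0.0) / TRIALS)
print("Policy B: mean surviving fraction =", mean(fraction_surviving_b))
print("Policy B: P(no branch survives)  =",
      sum(1 for x in fraction_surviving_b if x == 0.0) / TRIALS)
```

In this toy setup both policies have the same expected measure of surviving branches, but only the branch-splitting policy keeps the surviving fraction from ever being zero, which is the sense in which such a strategy seems to only make sense if many worlds is true.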
I’m looking for any prior work that considers this angle. Thanks for any references I can look at.