I explored similar ideas in these two posts:
Quantum Immortality: A Perspective if AI Doomers are Probably Right. Here I argued that only good outcomes containing a large number of observers matter anthropically, and that, under some interpretation of SIA, I am now more likely to find myself in a timeline that will carry me into a future with a large number of observers.
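As a minimal sketch of the SIA-style update being invoked here (my formalization, not taken from the original post): if timeline $i$ has prior probability $p_i$ and contains $N_i$ observers, then conditioning on one's own existence reweights belief toward observer-rich timelines:

$$P(\text{timeline}_i \mid \text{I exist}) = \frac{N_i \, p_i}{\sum_j N_j \, p_j}.$$

On this reading, a good timeline leading to a future with vastly many observers can dominate the posterior even if its prior $p_i$ is modest.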
and Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse. Here I explored the idea that benevolent superintelligences will try to win the measure war, aggregating as much measure as possible and thereby making bad outcomes anthropically irrelevant.
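Under the same hedged formalization as above, the measure-war claim amounts to driving the anthropic weight of bad outcomes toward zero: if benevolent superintelligences raise the measure-weighted observer count of good outcomes $N_{\text{good}}$ while $N_{\text{bad}}$ stays fixed, then

$$P(\text{bad} \mid \text{I exist}) = \frac{N_{\text{bad}}\, p_{\text{bad}}}{N_{\text{bad}}\, p_{\text{bad}} + N_{\text{good}}\, p_{\text{good}}} \;\to\; 0 \quad \text{as } N_{\text{good}} \to \infty.$$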
Simulation makes things interesting too: bad situations might be simulated for learning purposes.