The Doomsday argument and the simulation argument strike most people as quite bizarre. But they are not the only strange theories one can arrive at by employing anthropics.
Some examples:
You are likely to have an unusually high IQ
Perhaps the brain works in such a way that having a high IQ correlates with something that also generates more observer moments. Hence there are more high-IQ observer moments in the world than low-IQ ones, and a randomly sampled experience is more likely to belong to a high-IQ observer.
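To see how the sampling works, here is a minimal Python sketch. Everything in it is an illustrative assumption: the IQ distribution, the invented `observer_moments` link, and the population size; the argument only needs *some* positive correlation between IQ and moment count.

```python
import random

random.seed(0)

# A toy population with normally distributed IQs (illustrative numbers).
population = [random.gauss(100, 15) for _ in range(100_000)]

# Hypothetical link between IQ and how many observer moments a person
# generates; the exact functional form here is invented for the demo.
def observer_moments(iq):
    return max(iq - 60.0, 1.0)

# Sampling a random observer *moment* means sampling people weighted by
# how many moments each one generates.
weights = [observer_moments(iq) for iq in population]
sampled = random.choices(population, weights=weights, k=100_000)

print(f"mean IQ of a random person:          {sum(population) / len(population):.1f}")
print(f"mean IQ of a random observer moment: {sum(sampled) / len(sampled):.1f}")
# The second mean comes out higher (~106 vs ~100 here): if moments
# correlate with IQ, the typical experience belongs to a higher-IQ
# observer than the typical person does.
```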
Fragile universe
Universe-destroying physical catastrophes that expand at the speed of light (say, false vacuum collapse) could be very frequent, perhaps as often as once every second. It would be only due to survivorship bias that we think the universe is stable and safe. How would we know?
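The survivorship-bias point can be made explicit with Bayes. A sketch, where $\lambda$ is the unknown per-second collapse rate: conditional on anyone being left to make observations at all,

$$P(\text{we observe that we survived} \mid \lambda) = 1 \quad \text{for every } \lambda,$$

so the posterior equals the prior,

$$P(\lambda \mid \text{we observe that we survived}) \propto 1 \cdot P(\lambda) = P(\lambda),$$

and our continued existence gives no evidence that $\lambda$ is small. That is the sense in which we could not know.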
Animals do not have consciousness
There are far more animals than humans on Earth, yet we find ourselves as humans. Perhaps that is because we could only ever be humans: only humans have consciousness.
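In Self-Sampling Assumption terms, with deliberately rough, assumed counts: if animals were observers too, the probability of finding yourself human would be

$$P(\text{human}) = \frac{N_{\text{human}}}{N_{\text{human}} + N_{\text{animal}}} \approx \frac{10^{10}}{10^{10} + 10^{13}} \approx 10^{-3},$$

whereas it is 1 if only humans are conscious. The observation "I am human" therefore updates toward the latter hypothesis by a factor of roughly a thousand under these illustrative numbers (counting all animals rather than just, say, large vertebrates would make the update far stronger).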
We are stuck inside an infinite loop
Let's assume the simulation argument is correct. Then we probably exist inside a simulation run by some software. All software has bugs. One bug software sometimes has is getting stuck in an infinite loop. The largest share of the experience computed by this software could then lie inside such a loop.
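A back-of-the-envelope version of why the loop would dominate, with made-up numbers: suppose a bug-free run computes $L$ observer moments, but with small probability $p$ a run gets trapped in a loop that grinds out $M \gg L$ moments.

```python
# Illustrative assumptions only: p, L and M are invented for the sketch.
p = 1e-3   # chance a given run hits the infinite-loop bug
L = 10**6  # observer moments computed by a normal, bug-free run
M = 10**12 # moments computed inside the loop before it is (ever) killed

loop_moments = p * M          # expected looped moments per run: 10**9
normal_moments = (1 - p) * L  # expected normal moments per run: ~10**6

share = loop_moments / (loop_moments + normal_moments)
print(f"fraction of computed moments inside the loop: {share:.4f}")
# ~0.999: even a rare loop bug dominates the total, so a randomly
# sampled computed moment would most likely be a looped one.
```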
I have an urge to create a complete list:
Immortality is impossible.
AI with an IQ significantly higher than a human's is impossible (arXiv).
We will kill aliens (arXiv).
S-risks are rare.
You could manipulate probabilities by forgetting things (flux universe).
Earth is a typical civilization in the whole multiverse; nothing interesting anywhere.
Climate change could be a much worse existential risk than it appears, because of observation selection effects and the underestimated fragility of our environment.
We could cure past suffering via some advanced acausal trade, as well as resurrect the dead.
You are now in the middle of your life; you will not die in the next second (reverse DA; a worked version follows this list).
We could blackmail any future AI using reverse RB and make it safe.
We could use a random strategy to escape the Fermi paradox.
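The reverse-DA item can be made quantitative in the style of Gott's delta-t argument (a sketch under the usual uniformity assumption). If the present moment is uniformly distributed over your total lifespan, then for elapsed lifetime $t_{\text{past}}$ and a short interval $\varepsilon$,

$$P(t_{\text{future}} < \varepsilon) \approx \frac{\varepsilon}{t_{\text{past}}}, \qquad \varepsilon \ll t_{\text{past}},$$

so for someone thirty years old ($t_{\text{past}} \approx 10^9$ s) the anthropic probability of dying within the next second is on the order of $10^{-9}$.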
Why does anthropics suggest s-risks would be rare?
I presume that, unlike X-risk, s-risks don’t remove the vast majority of observer moments.
We are not currently in an s-risk situation, so it is not a typical state of affairs.
Wouldn't this apply to almost anything? If we are not currently in situation X, then X is not a typical state of affairs.
It indeed does apply to almost anything.
This is a great list, thanks!
My interpretation of anthropic arguments is that they reason in the same way we do in the many-worlds interpretation of quantum mechanics, so I think quantum immortality falls under what you're asking for.
Yes, fits right in!