I have an urge to create a complete list:
Immortality is impossible.
AI with an IQ significantly higher than human is impossible (arXiv).
We will kill aliens (arXiv).
S-risks are rare.
You could manipulate probabilities by forgetting things (flux universe).
Earth is a typical civilization in the whole multiverse: nothing interesting anywhere.
Climate change could be a much worse existential risk than expected, because of observational selection effects and the underestimated fragility of our environment.
We could cure past suffering via some advanced acausal trade, as well as resurrect the dead.
You are now in the middle of your life and will not die in the next second (reverse Doomsday argument; see the sketch after this list).
We could blackmail any future AI using a reverse Roko's basilisk and make it safe.
We could use a random strategy to escape the Fermi paradox.
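A minimal sketch of the reverse Doomsday argument above (my formalization, not the list author's), assuming self-sampling over the moments of a single life: treat the present moment as a uniform random draw from a life of total length $T$ seconds. Then

$$P(\text{this moment is the final second}) = \frac{1}{T},$$

so for $T \approx 2.5 \times 10^9$ seconds (roughly an 80-year life), the probability of dying in the next second is on the order of $10^{-9}$, and the sampled moment falls in the middle half of the life with probability $1/2$.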
Why does anthropics suggest S-risks would be rare?
I presume that, unlike X-risks, S-risks don't remove the vast majority of observer moments.
We are not currently in an S-risk situation, so it is not a typical state of affairs (see the sketch below).
Wouldn’t this apply to almost anything? If we are currently not in the situation of X, then X is not a typical state of affairs.
It indeed does apply to almost anything.
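To make the exchange above concrete, a hedged self-sampling calculation (my formalization, not from the thread): let $N_s$ be the number of observer moments inside S-risk scenarios and $N_o$ the number outside. For a randomly sampled observer moment,

$$P(\text{not in an S-risk}) = \frac{N_o}{N_s + N_o},$$

so observing that you are not in one is Bayesian evidence against hypotheses on which $N_s \gg N_o$. As the reply points out, substituting any condition X for "S-risk" yields the same update, which is exactly the stated weakness of the argument.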
This is a great list, thanks!