I propose to call metacosmology the hypothetical field of study which would be concerned with the following questions:
Studying the space of simple mathematical laws which produce counterfactual universes with intelligent life.
Studying the distribution over utility-function-space (and, more generally, mindspace) of those counterfactual minds.
Studying the distribution of the amount of resources available to the counterfactual civilizations, and broad features of their development trajectories.
Using all of the above to produce a distribution over concretized simulation hypotheses (a toy sketch of this composition follows below).
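To make the intended pipeline concrete, here is a deliberately toy Monte Carlo sketch in Python of how such distributions could be composed. Everything in it is an invented placeholder, not a claim about what metacosmology would actually find: the law families, the utility types, the resource distribution, and the weighting function are all assumptions made purely for illustration.

```python
import random
from collections import Counter

# Placeholder "spaces" standing in for the real objects of study.
LAWS = ["cellular-automaton", "field-theory-like", "graph-rewriting"]
UTILITY_TYPES = ["paperclip-like", "curiosity-driven", "suffering-averse"]

def sample_counterfactual_civilization(rng):
    """Sample one counterfactual civilization from toy priors."""
    law = rng.choice(LAWS)                 # simple mathematical law
    utility = rng.choice(UTILITY_TYPES)    # point in mindspace
    resources = rng.lognormvariate(0, 2)   # available resources (arbitrary units)
    return law, utility, resources

def simulation_weight(law, utility, resources):
    """Toy weight: how much simulation of worlds like ours this civilization 'buys'.
    Assumes (arbitrarily) that curiosity-driven simulators run more such simulations."""
    interest = {"paperclip-like": 0.1,
                "curiosity-driven": 1.0,
                "suffering-averse": 0.3}[utility]
    return interest * resources

def simulation_hypothesis_distribution(n_samples=100_000, seed=0):
    """Monte Carlo estimate of the distribution over 'who is simulating us'."""
    rng = random.Random(seed)
    weights = Counter()
    for _ in range(n_samples):
        law, utility, resources = sample_counterfactual_civilization(rng)
        weights[(law, utility)] += simulation_weight(law, utility, resources)
    total = sum(weights.values())
    return {hypothesis: w / total for hypothesis, w in weights.items()}

if __name__ == "__main__":
    dist = simulation_hypothesis_distribution()
    for hypothesis, p in sorted(dist.items(), key=lambda kv: -kv[1]):
        print(f"{hypothesis}: {p:.3f}")
```

The only point of the sketch is structural: a distribution over concretized simulation hypotheses is obtained by pushing samples through the chain laws → minds → resources → simulation weight, which is exactly the sequence of questions listed above.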
This concept is of potential interest for several reasons:
It could be beneficial to actually research metacosmology in order to draw practical conclusions. However, knowledge of metacosmology can pose an infohazard, since it is precisely what would make threats from potential simulators legible to us, and we would need to precommit not to accept blackmail from them.
A superintelligent AI's knowledge of metacosmology determines the extent to which it poses a risk via the influence of potential simulators.
In principle, we might be able to use knowledge of metacosmology to engineer an “atheist prior” for the AI that would exclude simulation hypotheses. However, this might be very difficult in practice; a minimal sketch of the bookkeeping involved follows below.
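To illustrate what such an “atheist prior” would amount to at the level of bookkeeping, here is a minimal Python sketch. It assumes, contrary to what makes the problem hard, that some classifier can mechanically recognize simulation hypotheses; the hypothesis names and the classifier are hypothetical placeholders.

```python
from typing import Callable, Dict, Hashable

def atheist_prior(prior: Dict[Hashable, float],
                  is_simulation_hypothesis: Callable[[Hashable], bool]) -> Dict[Hashable, float]:
    """Toy construction: zero out simulation hypotheses and renormalize.

    The genuinely difficult part is the classifier `is_simulation_hypothesis`,
    which is simply assumed here; this sketch only shows the reweighting."""
    filtered = {h: p for h, p in prior.items() if not is_simulation_hypothesis(h)}
    total = sum(filtered.values())
    if total == 0:
        raise ValueError("every hypothesis was excluded; the prior is undefined")
    return {h: p / total for h, p in filtered.items()}

if __name__ == "__main__":
    # Invented placeholder hypotheses, purely for illustration.
    toy_prior = {"base-physics-A": 0.5, "base-physics-B": 0.3, "simulated-by-X": 0.2}
    print(atheist_prior(toy_prior, lambda h: h.startswith("simulated")))
    # -> {'base-physics-A': 0.625, 'base-physics-B': 0.375}
```

The filtering and renormalization are trivial; the practical difficulty lies entirely in characterizing which hypotheses count as simulation hypotheses, which is where knowledge of metacosmology would have to come in.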