Simulation Typology and Termination Risks
Link post
The main idea of the article is that we likely live in a “Fermi simulation”, that is, a simulation created by aliens to resolve the Fermi paradox by simulating possible global risks, or in a “Singularity simulation”, one in which a future AI models its own origin (gaming simulations would also favour our period of history as the most interesting one). This means our simulation will likely be turned off soon, once at least one of three conditions is reached:
- The simulation models a global catastrophe, which is subjectively equivalent to termination.
- The simulation reaches a goal unknown to us (but likely related to the conditions of our AI’s origin) and is terminated.
- The simulation reaches a nested-simulation level shortly after the creation of strong AI, which drastically increases the demand for computational resources, and so the simulation is terminated.
We also suggested a classification of different types of simulations, patched the Simulation Argument, and proposed a Universal Simulation Argument, which must now take into account all possible civilisations in the universe. This makes the SA branches in which we are not in a simulation (“extinction soon” or “ethical future”) much less likely, as they would require that all possible civilisations go extinct or that all possible non-human civilisations behave ethically.
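The last step can be illustrated with a toy probability calculation. The point is structural, not empirical: the non-simulation branches of the argument require that *every* civilisation independently abstains from running simulations, and a conjunction over many independent civilisations shrinks rapidly. The specific numbers below are illustrative assumptions, not estimates from the article:

```python
# Toy model of the Universal Simulation Argument's conjunction step.
# The non-simulation branches require that EVERY civilisation abstains
# from ancestor simulations (via extinction or via ethics).
# All probabilities here are assumed for illustration only.

def p_no_simulations(p_single_abstains: float, n_civilisations: int) -> float:
    """Probability that none of n independent civilisations ever runs
    a simulation, given each abstains with probability p_single_abstains."""
    return p_single_abstains ** n_civilisations

# Even if one civilisation is 90% likely never to simulate,
# the chance that all of 100 civilisations abstain collapses toward zero.
one = p_no_simulations(0.9, 1)      # 0.9
many = p_no_simulations(0.9, 100)   # roughly 2.7e-05
```

So under these assumed numbers, moving from a single civilisation to a universe of civilisations takes the non-simulation branches from plausible to negligible, which is the force of the universal version of the argument.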