How to Escape From the Simulation (Seeds of Science)
Seeds of Science (a scientific journal specializing in speculative and exploratory work) recently published a paper, "How to Escape From the Simulation," that may be of interest to the LW community.
Author: Roman Yampolskiy
Full text (open access): PDF
Abstract
Many researchers have conjectured that humankind is simulated along with the rest of the physical universe – a Simulation Hypothesis. In this paper, we do not evaluate evidence for or against such a claim, but instead ask a computer science question, namely: Can we hack the simulation? More formally the question could be phrased as: Could generally intelligent agents placed in virtual environments find a way to jailbreak out of them? Given that the state-of-the-art literature on AI containment answers in the affirmative (AI is uncontainable in the long-term), we conclude that it should be possible to escape from the simulation, at least with the help of superintelligent AI. By contraposition, if escape from the simulation is not possible, containment of AI should be. Finally, the paper surveys and proposes ideas for hacking the simulation and analyzes ethical and philosophical issues of such an undertaking.
--
At the end of the main text you will see comments included from the "gardeners" (reviewers). If anyone has a comment on the paper, you can email info@theseedsofscience.org and we will add it to the PDF.
I don't think that's established in any general sense. AI is unlikely to be contained by the simulations we expect to use, but those are VERY permeable by design, in order to extract fairly coarse-grained value from the AI. Nobody has considered a simulation/container with literally zero input or feedback from the "real" world.
If we’re in a simulation, it seems to be fairly self-consistent, with no ongoing interference or feedback from the containing environment.