Given that the state-of-the-art literature on AI containment answers in the affirmative (AI is uncontainable in the long term)
I don’t think that’s established in any general sense. AI is unlikely to be contained by the simulations we expect to use, but those are VERY permeable by design, in order to extract fairly coarse-grained value from the AI. Nobody has considered a simulation/container with literally zero input/feedback from the “real” world.
If we’re in a simulation, it seems to be fairly self-consistent, with no ongoing interference or feedback from the containing environment.