This roughly tracks what’s going on in our real beliefs, and why it seems absurd to us to infer that the world is a dream of a rational agent—why think that the agent will assign higher probability to the real world than the “right” prior? (The simulation argument is actually quite subtle, but I think that after all the dust clears this intuition is basically right.)
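One way to spell that intuition out (my notation, nothing the argument itself commits to): by Bayes, the odds of the dream hypothesis against reality are

$$\frac{P(\text{dream}\mid o)}{P(\text{real}\mid o)} = \frac{P(\text{dream})}{P(\text{real})} \cdot \frac{P(o\mid\text{dream})}{P(o\mid\text{real})},$$

so the dream hypothesis only gains ground if the agent's imagination concentrates more probability on our exact observations $o$ than the “right” prior does, and that is precisely the step the inference takes for granted.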
To the extent that we instinctively believe or disbelieve this, it’s not for the right reasons: natural selection didn’t have any evidence to go on. At most, that instinct is a useful workaround for the existential dread glitch.
Assume that there is a real prior (I like to call this programming language Celestial), and that it can be found from first principles plus an example universe to work with. Then I wouldn’t be surprised if we receive more weight indirectly than directly (see the sketch after this list). After all:
- Our laws of physics may be simple, but the night sky we see, devoid of aliens, suggests that it takes quite a few bits to locate us in time, space, and improbability.
- An anthropic bias would circumvent this, and agents living in the multiverse would be incentivized to implement one: the universes thereby promoted are particularly likely to themselves simulate the multiverse and act on what they see, and those are the only universes vulnerable to the agent’s attack.
- Our universe may be particularly suited to simulating the multiverse in vulnerable ways, because of our quantum computers. All it takes is that we run a superposition of all programs, rely on a mathematical heuristic which says that almost all of the amplitudes cancel out, and get tricked when the agent deploys the sort of self-referential paradox that mathematical heuristics tend to be wrong about.
- If the quirks of chaos theory don’t force the agent to simulate all of our universe in order to simulate any of it, then at least the only ones of us who have to worry about being simulated in detail in preparation for an attack are AI/AI safety researchers :P.
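To put some toy numbers on the “more weight indirectly than directly” claim: under a length-based, Solomonoff-style prior, a program of n bits gets weight 2^-n, and a universe’s total weight sums over every program that outputs it. The sketch below is a back-of-the-envelope illustration only; all the bit counts are invented placeholders, and a real prior sums over all programs rather than the two routes singled out here.

```python
# Toy arithmetic for the "direct vs. indirect weight" claim, under a
# length-based (Solomonoff-style) prior: a hypothesis written as a
# program of n bits gets prior weight 2**-n, and a universe's total
# weight is the sum over all programs that output it.
# Every bit count below is a made-up placeholder, not an estimate.

PHYSICS_BITS   = 300   # hypothetical: a program for our laws of physics
LOCATING_BITS  = 500   # hypothetical: bits to pick us out in time, space, improbability
SIMULATOR_BITS = 400   # hypothetical: an agent that simulates the multiverse
ANTHROPIC_BITS = 100   # hypothetical: "promote universes containing observers
                       #  who simulate the multiverse and act on what they see"

def weight(bits: int) -> float:
    """Prior weight of a single program of the given length."""
    return 2.0 ** -bits

# Direct route: specify physics, then spend bits locating the observer.
direct = weight(PHYSICS_BITS + LOCATING_BITS)

# Indirect route: a simulator with an anthropic bias reaches the same
# observer without paying the full locating cost.
indirect = weight(SIMULATOR_BITS + ANTHROPIC_BITS)

total = direct + indirect
print(f"direct:   {direct:.3e}")    # 2^-800
print(f"indirect: {indirect:.3e}")  # 2^-500, larger by a factor of 2^300
print(f"indirect share of total: {indirect / total:.6f}")
```

The point is purely structural: whenever locating us costs more bits than specifying an anthropically biased simulator, the indirect route dominates the total weight.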
To the extent that we believe this correctly, it’s for the same reasons that we are able to do math and philosophy correctly (or at least more correctly than chance :) despite natural selection not caring much about either. It’s the same reason that you can correctly make arguments like the one in your comment.