Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I’m glad Egan’s at least looked at it a bit).
This gets us a contradiction, not a refutation, and one man’s modus ponens is another man’s modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?
Can we use this to argue for a flaw in the original simulation argument?
I don’t think so, since it’s reasonable to hypothesize that man-made simulations would, generally speaking, be more on the orderly side rather than being full of random nonsense.
But it’s still an interesting question. One can imagine a room with 2 large computers. The first computer has been carefully programmed to simulate 1950s Los Angeles. There are people in the simulation who are completely convinced that they live in Los Angeles in the 1950s.
The second computer is just doing random computations. But arguably there is some cryptographic interpretation of those computations which also yields a simulation of 1950s Los Angeles.
I’d like to see that argument. If you can find a mapping that doesn’t end up encoding the simulation in the mapping, I’d be surprised.
Well why should it matter if the simulation is encoded in the mapping?
If it is, that screens off any features of what it’s mapping; you can no longer be surprised that ‘random noise’ produces such output.
Again, so what?
Let me adjust the original thought experiment:
The operation of the first computer is encrypted using a very large one-time pad.
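A minimal sketch of the worry about the mapping doing all the work: with a one-time pad, you can pick a pad *after the fact* that “decrypts” any random bitstring into any target state you like, so all the structure lives in the pad, not the noise. (The names and the toy “state” below are purely illustrative, not from the thread.)

```python
import os

# The "simulation state" we want to find in the noise (illustrative).
target = b"1950s Los Angeles, frame 0"

# The random computer's output: just noise.
noise = os.urandom(len(target))

# Choose the pad *after* seeing the noise: pad = noise XOR target.
pad = bytes(n ^ t for n, t in zip(noise, target))

# "Decrypting" the noise with this pad recovers the target exactly...
decoded = bytes(n ^ p for n, p in zip(noise, pad))
assert decoded == target

# ...but the pad is as long as the target and was constructed from it,
# so the simulation is encoded in the mapping, not in the noise.
```

The point of the toy example is that such a pad exists for *every* possible target, which is exactly why finding one is no evidence that the noise itself contains a simulation.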