How is it not a simulation? [...] How do you define simulation in a way that [...]
I’m not sure how to answer either question, and I’m not sure anyone has a perfectly satisfactory definition of “simulation”. I’m pretty sure I’ve never seen one. But consider the following scenario.
Omega looks at the structure of your brain, and deduces a theorem of the following sort: For a broad class of individuals with these, and those, and these other, features, they will almost all make the one-box-or-two decision the same way. (The theorem doesn’t say which way. In fact, the actual form of the theorem is: “Given features A, B and C, any two individuals for which parameters X, Y and Z are within delta will go the same way with probability at least 1-epsilon”, and features A, B and C are found about as often in one-boxers as in two-boxers. Finding and proving such a theorem doesn’t in itself give Omega much information about whether you’re likely to one-box or two-box.) Omega then picks one of those individuals which isn’t you and simulates it; then Omega knows with high confidence which way you will choose.
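(For concreteness, the shape of such a theorem could be written roughly as below. The notation is mine and purely illustrative: D(i) is the decision individual i makes, and the quantification is over individuals having features A, B and C.)

```latex
\[
\forall\, i, j \text{ with features } A, B, C:\qquad
\bigl\lVert (X_i, Y_i, Z_i) - (X_j, Y_j, Z_j) \bigr\rVert < \delta
\;\Longrightarrow\;
\Pr\bigl[ D(i) = D(j) \bigr] \ge 1 - \varepsilon
\]
```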
Let’s suppose—since so far as I can see it makes no difference to the best arguments I know of either for one-boxing or for two-boxing—that Omega tells you all of the above. (But not, of course, which way the prediction ended up.)
In this scenario, what grounds are there for thinking that you could as easily be in Omega’s simulation as in the outside world? You know that the actual literal simulation was of someone else. You know that the theorem-proving was broad enough to cover a wide class of individuals besides yourself, including both one-boxers and two-boxers. So what’s going on here that’s a simulation you could be “in”?
(Technical note: If you take the form I gave for the theorem too literally, you’ll find that things couldn’t actually be quite as I described. Seeing why and figuring out how to patch it are left as exercises for the reader.)
How would such a device work?
There’s a wormhole between our present and our future. Omega looks through it and sees your lips move.
Omega then picks one of those individuals which isn’t you and simulates it; then Omega knows with high confidence which way you will choose.
Seems like Omega is simulating me for the purposes that matter. The “isn’t you” statement is not as trivial as it sounds: it requires a widely agreed-upon definition of identity, something that gets blurred easily once you allow for human simulation. For example, how do you know you are not a part of the proof? How do you know that Omega’s statement that it simulated “someone else” is even truth-evaluable? (And not, for example, a version of “this statement is false”.)
There’s a wormhole between our present and our future. Omega looks through it and sees your lips move.
Ah, Novikov’s self-consistent time loops. Given that there is no way to construct those using currently known physics, I tend to discard them. Besides, they reduce all NP-hard problems to O(1) and such, making the world too boring to live in.
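(To spell out the complexity claim I’m gesturing at: the usual story is that you receive a candidate answer from the future, check it, and send back a different one if it’s wrong; Novikov consistency then rules out every history except one in which the received answer was already correct. Below is a toy classical emulation of that trick for SAT. The names are mine and the brute-force search merely stands in for the self-consistent fixed point the wormhole would supposedly hand you for free.)

```python
from itertools import product

def satisfies(clauses, assignment):
    """Check a CNF formula: each clause is a list of ints, +k / -k meaning x_k / not x_k."""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def novikov_sat(clauses, n_vars):
    """Toy emulation of the 'self-consistent time loop' trick for SAT.

    The physical story: receive an assignment from the future, send it back
    unchanged if it satisfies the formula, send back a different one otherwise.
    The only self-consistent history is one where the received assignment is
    already satisfying. Lacking a wormhole, this emulation finds that fixed
    point by exhaustive search instead.
    """
    for assignment in product([False, True], repeat=n_vars):
        if satisfies(clauses, assignment):   # a self-consistent history exists
            return assignment
    return None                              # no consistent history: unsatisfiable

# Example: (x1 or not x2) and (x2 or x3)
print(novikov_sat([[1, -2], [2, 3]], n_vars=3))
```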