What do you mean by “simulated”? In Parfit’s hitchhiker scenario, as I understand it, the driver isn’t doing anything remotely like a full simulation of Alice; just understanding her thinking well enough to make a guess at whether he’ll get paid later. In which case, why should anyone think they are equally likely to be Alice or SimAlice?
[EDITED to add:] For that matter, in Newcomb’s problem it’s not generally stated explicitly that simulation is involved. Maybe Omega is just really good at proving theorems about people’s behaviour and can do it at a high enough level of abstraction that no simulation is needed. Or maybe Omega has a chronoscope and is just looking at the future, though it’s not obvious that this case doesn’t need treating differently.
So perhaps SCDT needs to be reformulated in terms of some principle to the effect that, when someone or something might try to predict your actions, you need to imagine yourself as being simulated. And I suspect that formalizing this ends up with it looking a lot like TDT.
> Maybe Omega is just really good at proving theorems about people’s behaviour and can do it at a high enough level of abstraction that no simulation is needed.
How is it not a simulation? How do you define simulation in a way that includes manipulating bits (running a program) but excludes manipulating strings according to a set of rules (proving theorems)?
> Or maybe Omega has a chronoscope and is just looking at the future
This is basically isomorphic to simulation, only now Omega is included in it. How would such a device work? The opaque box is already full or empty before you make your choice, so the chronotron can either run two timelines (box full and box empty) and then terminate the one where its guess was wrong, or run a single timeline in which Omega places nothing, look into the future, and then, if needed, go back and rerun the events with the box filled, again terminating the original timeline if the guess was incorrect.
In both cases there is a “simulation” going on, only the chronotron is acting as a super-Omega.
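To make the super-Omega point concrete, here is a minimal sketch of the second strategy, assuming the usual opaque-box setup; `simulate_world`, `chronotron_predict` and the returned values are my own illustrative stand-ins, not anything specified in the problem:

```python
def simulate_world(box_full, agent):
    """Stand-in for running one timeline forward.  The box is opaque, so the
    agent's choice should not actually depend on box_full; it is just part of
    the state of the world being run."""
    return agent()                      # returns "one-box" or "two-box"

def chronotron_predict(agent):
    """The second strategy above: run one timeline with the box empty, look at
    the outcome through the chronoscope, and, if the guess was wrong, terminate
    that timeline and rerun events with the box filled."""
    provisional = simulate_world(box_full=False, agent=agent)
    if provisional == "one-box":        # empty box was the wrong guess for a one-boxer
        final = simulate_world(box_full=True, agent=agent)   # rerun with the box filled
        return final, True              # (observed choice, box contents)
    return provisional, False           # the empty-box timeline was already consistent

# Either way a timeline containing the agent gets run and then kept or discarded,
# which is the sense in which the chronotron acts as a super-Omega.
print(chronotron_predict(lambda: "one-box"))   # -> ('one-box', True)
```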
> How is it not a simulation? [...] How do you define simulation in a way that [...]
I’m not sure how to answer either question, and I’m not sure anyone has a perfectly satisfactory definition of “simulation”. I’m pretty sure I’ve never seen one. But consider the following scenario.
Omega looks at the structure of your brain, and deduces a theorem of the following sort: For a broad class of individuals with these, and those, and these other, features, they will almost all make the one-box-or-two decision the same way. (The theorem doesn’t say which way. In fact, the actual form of the theorem is: “Given features A, B and C, any two individuals for which parameters X, Y and Z are within delta will go the same way with probability at least 1-epsilon”, and features A, B and C are found about as often in one-boxers as in two-boxers. Finding and proving such a theorem doesn’t in itself give Omega much information about whether you’re likely to one-box or two-box.) Omega then picks one of those individuals which isn’t you and simulates it; then Omega knows with high confidence which way you will choose.
Let’s suppose—since so far as I can see it makes no difference to the best arguments I know of either for one-boxing or for two-boxing—that Omega tells you all of the above. (But not, of course, which way the prediction ended up.)
In this scenario, what grounds are there for thinking that you could as easily be in Omega’s simulation as in the outside world? You know that the actual literal simulation was of someone else. You know that the theorem-proving was broad enough to cover a wide class of individuals besides yourself, including both one-boxers and two-boxers. So what’s going on here that’s a simulation you could be “in”?
(Technical note: If you take the form I gave for the theorem too literally, you’ll find that things couldn’t actually be quite as I described. Seeing why and figuring out how to patch it are left as exercises for the reader.)
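For concreteness, here is a toy sketch of the procedure I have in mind; the `Individual` class, the particular numbers and the value of `delta` are all made up for illustration, and the only point is that the single literal simulation that gets run is of someone who isn’t you:

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    features: tuple          # stands in for "features A, B and C"
    parameters: tuple        # stands in for "parameters X, Y and Z"
    disposition: str         # "one-box" or "two-box", hidden until simulated

    def simulate(self):
        return self.disposition

def omega_predict(you, population, delta=0.05):
    """Toy version of the procedure described above.  The 'theorem' guarantees
    that any two individuals sharing the features whose parameters agree to
    within delta choose the same way with high probability, so Omega simulates
    some *other* member of that class and reads off its choice as the prediction."""
    reference_class = [
        p for p in population
        if p is not you
        and p.features == you.features
        and all(abs(px - yx) < delta for px, yx in zip(p.parameters, you.parameters))
    ]
    stand_in = random.choice(reference_class)   # explicitly not you
    return stand_in.simulate()                  # the only literal simulation that runs

you = Individual(("A", "B", "C"), (0.31, 0.62, 0.18), "one-box")
others = [Individual(("A", "B", "C"), (0.30, 0.63, 0.17), "one-box"),
          Individual(("A", "B", "C"), (0.90, 0.10, 0.50), "two-box")]
print(omega_predict(you, others))   # -> 'one-box', without ever simulating `you`
```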
> How would such a device work?
There’s a wormhole between our present and our future. Omega looks through it and sees your lips move.
> Omega then picks one of those individuals which isn’t you and simulates it; then Omega knows with high confidence which way you will choose.
Seems like Omega is simulating me for the purposes that matter. The “isn’t you” part is not as trivial as it sounds: it requires a widely agreed-upon definition of identity, something that gets blurred easily once you allow for human simulation. For example, how do you know you are not a part of the proof? How do you know that Omega’s statement that it simulated “someone else” is even truth-evaluable? (And not, for example, a version of “this statement is false”.)
> There’s a wormhole between our present and our future. Omega looks through it and sees your lips move.
Ah, Novikov’s self-consistent time loops. Given that there is no way to construct them using currently known physics, I tend to discard them, especially since they reduce all NP-hard problems to O(1) and the like, making the world too boring to live in.
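The folklore construction behind that claim: put a candidate answer on the loop and let the dynamics send a correct answer back unchanged while perturbing a wrong one, so the only self-consistent history is one that already carried a correct answer. A toy sketch, where the brute-force fixed-point search below stands in for the work the loop would do for free:

```python
from itertools import product

def satisfies(assignment, clauses):
    """CNF check: each clause is a tuple of signed 1-based variable indices."""
    return all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses)

def loop_dynamics(assignment, clauses, candidates):
    """One trip around the time loop: a correct answer is sent back unchanged,
    anything else is perturbed to the next candidate."""
    if satisfies(assignment, clauses):
        return assignment
    nxt = (candidates.index(assignment) + 1) % len(candidates)
    return candidates[nxt]

# (x1 or x2) and (not x1 or x2): satisfiable, e.g. by x2 = True.
clauses = [(1, 2), (-1, 2)]
candidates = list(product((False, True), repeat=2))

# Brute-force the fixed point that Novikov consistency would simply hand us.
# (Assumes the formula is satisfiable; otherwise this map has no fixed point.)
state = candidates[0]
while loop_dynamics(state, clauses, candidates) != state:
    state = loop_dynamics(state, clauses, candidates)
print(state)   # (False, True): a satisfying assignment, handed over "in O(1)" by the loop
```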
In a less convenient universe, Omega tells you that it has discovered that someone’s zodiac sign predicts their response to Newcomb’s problem with 90% accuracy. What do you do?
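For scale, the naive expected-value arithmetic with the usual $1,000,000 / $1,000 payoffs (my assumption; nothing above fixes the stakes), conditioning the prediction on your choice:

\[
\begin{aligned}
\mathrm{EV}(\text{one-box}) &= 0.9 \times \$1{,}000{,}000 = \$900{,}000,\\
\mathrm{EV}(\text{two-box}) &= 0.1 \times \$1{,}000{,}000 + \$1{,}000 = \$101{,}000.
\end{aligned}
\]

Whether a mere zodiac correlation licenses conditioning on your own choice like that, rather than treating the box contents as already fixed, is presumably the crux.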
You don’t need any of that magic for imperfect predictors; people are pretty predictable as it is, just not perfectly so. And the effort required to improve prediction accuracy diverges as the accuracy approaches certainty.
To clarify, I don’t see why SCDT would one-box in the zodiac setting.
Why do you stipulate 90% accuracy, not 100%?
Because 100% accuracy would just be weird.