At a minimum, I can’t see how two-boxing could be worse in terms of risk of being turned off. I suppose Omega could think I was trying to be tricky by two-boxing specifically to avoid giving away my awareness that I’m being simulated, but at that point the psychology becomes infinitely recursive. I’ll take my chances while the simulator puzzles that out.
I’m not sure I understand your parenthetical. Does the existence of a simulation imply the existence of an outside entity being simulated?
can’t see how two-boxing could be worse in terms of risk of being turned off.
Neither can I. Nor can I see how it could be better. In fact, I see no likely correlation between one/two-boxing and likelihood of being turned off at all. But if my chances of being turned off aren’t affected by my one/two-box choice, then “One-boxing would [..] risk getting me turned off [..] so I two-box” doesn’t make much sense.
You clearly have a scenario in mind wherein I get turned off if my simulator is aware that I’m aware that I’m being simulated and not otherwise, but I don’t understand why I should expect that.
Does the existence of a simulation imply the existence of an outside entity being simulated?
To be honest, I’ve never quite understood what the difference is supposed to be between the phrases “existing in a simulation” and “existing”.
But regardless, my understanding of “If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation” had initially been something like “If Omega can perfectly model Dave’s mental processes in order to determine Dave’s likely actions, then Omega will probably create lots of simulated Daves in the process. Since those simulated Daves will think they are Dave, and there are many more of them than there are of Dave, and I think I’m Dave, the odds are (if Omega exists and can do this stuff) that I’m in a simulation.”
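The odds in that reading can be made concrete with a toy calculation. This is just an illustrative sketch; the number of simulated Daves is a made-up assumption, not anything the scenario specifies:

```python
def p_simulated(n_sims: int) -> float:
    """Probability that a randomly chosen 'Dave-thinker' is a simulation,
    given n_sims simulated Daves plus the one outside Dave, with each
    copy weighted equally (an anthropic assumption, not a given)."""
    return n_sims / (n_sims + 1)

print(p_simulated(1))    # one sim, one original: 0.5
print(p_simulated(999))  # many sims: 0.999
```

The point being that under this reading, the conclusion "I'm almost certainly a simulation" only needs Omega to run many copies, since the probability approaches 1 as the number of copies grows.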
All of which also implies that there’s an outside entity being simulated in this scenario, in which case if I feel loyalty to that entity (or otherwise have some basis for caring about how my choices affect it) then whether I get turned off or not isn’t my only concern anyway.
I infer from your question that I misunderstood you in the first place, though, in which case you can probably ignore my parenthetical. Let me back up and ask, instead: why, if the being claiming to be Omega actually exists and can in fact instantly model my mental processes, am I almost certainly a simulation?
My thinking here is that if a being suddenly shows up and can perfectly model me, despite not having scanned my neural pathways, taken any tissue samples, observed my life history, or gathered any other data whatsoever, then it’s cheating somehow—i.e. I’m a simulation and it has my source code.
This doesn’t require there to be a more real Prismattic one turtle down, as it were. I could be a simulation created to test a set of parameters, not necessarily a model of another entity.
Ah, I see.
OK, thanks for clarifying.