There are various ways to defeat “choshi dori,” although the gentleman in question could certainly get the vast majority of randomly chosen people to fall for it. Whatever “free will” is, it’s probably more complicated than just taking Omega at its word. Perhaps Omega achieved his accuracy by a similar defeatable hack.
Omega claims to “open up the agent,” and my response is to try to “open up Omega,” to see what’s behind his prediction accuracy.
You are (merely) fighting the hypothetical. Let’s try using your martial arts analogy. Consider the following:
You find yourself in a real-world physical confrontation with a ninja who demands your wallet. You have seen this ninja fight several other ninjas, a pirate, and a Jedi in turn, and each time he used “choshi dori” on them, then proceeded to break both of their legs and take their wallet. What do you do?
Punch the ninja in the face.
Shout “I have free will!” and punch the ninja in the face.
Think “I want to open up the ninja and see how his choshi dori works” then try to punch the ninja in the face.
Toss your wallet to the ninja and then run away.
This isn’t a trick question. Every answer that involves punching the ninja in the face (or, in the original problem, taking two boxes) is wrong: it leaves you with two broken legs or an otherwise less desirable outcome.
Sometimes people fight a hypothetical because the hypothetical is problematic. I lean toward two-boxing in Newcomb’s problem, basically because I can’t not fight this hypothetical. My reasoning is more or less as follows. If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation. One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box. If I’m not a simulation, I don’t accept the possibility of Omega existing in the first place, so I two-box. Basically, I think Newcomb’s problem is not a particularly useful hypothetical, because I don’t see it as predictive of decision-making in other circumstances.
One-boxing would reveal that I know that and risk getting me turned off, making the money in the box rather beside the point, so I two-box.
It seems to me that if Omega concludes that you are aware that you are in a simulation based on the fact that you take one box then Omega is systematically wrong when reasoning about a broad class of agents that happens to include all the rational agents (and some others). This is rather a significant flaw in an Omega implementation.
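(A toy way to see the “systematically wrong” point, using hypothetical agent labels that are mine rather than anything specified in this thread: if Omega’s inference rule is simply “one-boxes, therefore knows it is in a simulation,” the rule misfires on every agent that one-boxes for ordinary decision-theoretic reasons.)

```python
# Sketch only: a predictor that infers "believes it is simulated" from the
# bare fact of one-boxing. Agent names and labels are illustrative.
agents = {
    "CDT two-boxer":          {"one_boxes": False, "believes_simulated": False},
    "EDT one-boxer":          {"one_boxes": True,  "believes_simulated": False},
    "UDT one-boxer":          {"one_boxes": True,  "believes_simulated": False},
    "simulation-aware agent": {"one_boxes": True,  "believes_simulated": True},
}

for name, agent in agents.items():
    inferred = agent["one_boxes"]  # Omega's proposed inference rule
    verdict = "correct" if inferred == agent["believes_simulated"] else "wrong"
    print(f"{name}: inference is {verdict}")

# The rule is wrong for every agent that one-boxes on ordinary
# decision-theoretic grounds, i.e. for a broad class of rational agents.
```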
Basically, I think Newcomb’s problem is not a particularly useful hypothetical, because I don’t see it as predictive of decision-making in other circumstances.
For agents with coherent decision-making procedures it is equivalent to playing a Prisoner’s Dilemma against a clone of yourself, which is something that feels closer to a real-world scenario for some people. It is similarly equivalent to Parfit’s Hitchhiker when said hitchhiker is at the ATM.
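To make the equivalence concrete, here is a minimal sketch of the expected-payoff comparison in the standard Newcomb setup, assuming a predictor with accuracy p and the conventional $1,000 / $1,000,000 amounts (the payoffs are the textbook ones, not anything stated in this thread):

```python
# Minimal sketch: expected winnings under a predictor with accuracy p,
# using the conventional Newcomb payoffs (assumed, not from the thread).
def expected_payoff(one_box: bool, p: float) -> float:
    small, big = 1_000, 1_000_000
    if one_box:
        # With probability p the predictor foresaw one-boxing, so the opaque box is full.
        return p * big
    # With probability p the predictor foresaw two-boxing, so the opaque box is empty.
    return p * small + (1 - p) * (small + big)

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))

# For any accuracy above roughly 0.5005, one-boxing has the higher expectation,
# which is the sense in which it parallels cooperating with a clone of yourself.
```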
That’s why I don’t like Newcomb’s problem. In a Prisoner’s Dilemma with myself, I’d cooperate (I trust myself to cooperate with myself). Throwing Omega in confuses this pointlessly. I suspect that if people substituted “God” for “Omega” I’d get more sympathy on this.
Are you suggesting that if you are a simulation, two-boxing reduces your risk of being turned off? If not, I don’t understand your reasoning at all. If so, I guess I understand your reasoning from that point on (presumably you feel no particular loyalty to the entity you’re simulating?), but I don’t understand how you arrive at that point.
At a minimum, I can’t see how two-boxing could be worse in terms of the risk of being turned off. I suppose Omega could think I was trying to be tricky by two-boxing specifically to avoid giving away my awareness that I’m being simulated, but at that point the psychology becomes infinitely recursive. I’ll take my chances while the simulator puzzles that out.
I’m not sure I understand your parenthetical. Does the existence of a simulation imply the existence of an outside entity being simulated?
can’t see how two-boxing could be worse in terms of risk of being turned off.
Neither can I. Nor can I see how it could be better. In fact, I see no likely correlation between one/two-boxing and likelihood of being turned off at all. But if my chances of being turned off aren’t affected by my one/two-box choice, then “One-boxing would [..] risk getting me turned off [..] so I two-box” doesn’t make much sense.
You clearly have a scenario in mind wherein I get turned off if my simulator is aware that I’m aware that I’m being simulated and not otherwise, but I don’t understand why I should expect that.
Does the existence of a simulation imply the existence of an outside entity being simulated?
To be honest, I’ve never quite understood what the difference is supposed to be between the phrases “existing in a simulation” and “existing”.
But regardless, my understanding of “If the being claiming to be Omega actually exists and can in fact instantly model my mental processes, then I’m almost certainly a simulation” had initially been something like “If Omega can perfectly model Dave’s mental processes in order to determine Dave’s likely actions, then Omega will probably create lots of simulated Daves in the process. Since those simulated Daves will think they are Dave, and there are many more of them than there are of Dave, and I think I’m Dave, the odds are (if Omega exists and can do this stuff) that I’m in a simulation.”
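(Purely as an illustration of that counting step, with a made-up number of simulations; nothing in the thread specifies how many models Omega would run:)

```python
# Illustrative arithmetic only: the "lots of simulated Daves" argument.
n_simulated_daves = 1_000   # assumption: Omega runs this many models of Dave
n_original_daves = 1
p_simulation = n_simulated_daves / (n_simulated_daves + n_original_daves)
print(f"P(I am a simulation) = {p_simulation:.3f}")  # ~0.999
```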
That reading also implies there’s an outside entity being simulated in this scenario, in which case, if I feel loyalty to that entity (or otherwise have some basis for caring about how my choices affect it), then whether I get turned off or not isn’t my only concern anyway.
I infer from your question that I misunderstood you in the first place, though, in which case you can probably ignore my parenthetical. Let me back up and ask instead: why, if the being claiming to be Omega actually exists and can in fact instantly model my mental processes, am I almost certainly a simulation?
My thinking here is that if a being suddenly shows up and can perfectly model me, despite not having scanned my neural pathways, taken any tissue samples, observed my life history, or gathered any other data whatsoever, then it’s cheating somehow—i.e. I’m a simulation and it has my source code.
This doesn’t require there to be a more real Prismattic one turtle down, as it were. I could be a simulation created to test a set of parameters, not necessarily a model of another entity.
Ah, I see.
OK, thanks for clarifying.