Oh, that’s fair. I was thinking of “you don’t know whether you’re real or a simulation” as an intuitive way to prove the case for all “conscious” simulations. It doesn’t have to be perfect—you could just as easily be an inaccurate simulation, with no way to know that you are a simulation and no way to know that you are inaccurate with respect to an original.
I was trying to get people to generalize downwards from the extreme intuitive example: even with decreasing accuracy, as the simulation becomes so rough as to lose “consciousness” and “personhood”, the argument keeps holding.
Yeah, the argument would hold just as much with an inaccurate simulation as with an accurate one. The point I was trying to make wasn’t so much that the simulation isn’t going to be accurate enough, but that a simulation argument shouldn’t be a prerequisite to one-boxing. If the experiment were performed with human predictors (let’s say a psychologist who predicts correctly 75% of the time), one-boxing would still be rational despite knowing you’re not a simulation. I think LW relies on computationalism as a substitute for actually being reflectively consistent in problems such as these.
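To make the 75% case concrete, here is a minimal expected-value sketch. It assumes the stated accuracy holds conditional on either choice you might make, and it uses the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box if you were predicted to one-box); both are assumptions for illustration.

```python
# Expected value of each choice against a predictor that is right 75% of the
# time, assuming that accuracy applies whichever way you actually decide.
ACCURACY = 0.75
SMALL, BIG = 1_000, 1_000_000  # transparent box; opaque box if predicted to one-box

ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(f"one-box: {ev_one_box:,.0f}")  # 750,000
print(f"two-box: {ev_two_box:,.0f}")  # 251,000
```

On those assumptions one-boxing comes out well ahead; the rest of the thread is essentially about when conditioning on your own choice like this is legitimate.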
The trouble with real world examples is that we start introducing knowledge into the problem that we wouldn’t ideally have. The psychologist’s 75% success rate doesn’t necessarily apply to you—in the real world you can make a different estimate than the one that is given. If you’re an actor or a poker player, you’ll have a much different estimate of how things are going to work out.
Psychologists are just messier versions of brain scanners—the fundamental premise is that they are trying to access your source code.
And what’s more—suppose the predictions weren’t made by accessing your source code? The direction of causality does matter. If Omega can predict the future, the causal lines flow backwards from your choice to Omega’s past move. If Omega is scanning your brain, the causal lines go from your brain-state to Omega’s decision. If there are no causal lines between your brain/actions and Omega’s choice, you always two-box.
Real world example: what if I substituted your psychologist for a sociologist, who predicted you with above-chance accuracy using only your demographic factors? In this scenario, you ought to two-box. If you disagree, let me know and I can explain myself.
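A quick sketch of why (assuming the sociologist’s prediction is fixed by demographics you cannot change, so your decision cannot move it): whatever probability q you assign to the opaque box being full, two-boxing is worth exactly $1,000 more. The payoff numbers are again the standard Newcomb values, assumed for illustration.

```python
# If the prediction depends only on fixed demographic factors, the probability
# q that the opaque box is full is the same whichever way you decide, so
# two-boxing gains the $1,000 in the transparent box for every value of q.
SMALL, BIG = 1_000, 1_000_000

def expected_value(two_box: bool, q_box_full: float) -> float:
    return q_box_full * BIG + (SMALL if two_box else 0)

for q in (0.1, 0.5, 0.9):
    gain = expected_value(True, q) - expected_value(False, q)
    print(f"P(box full) = {q}: two-boxing gains {gain:,.0f}")  # 1,000 every time
```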
In the real world, you don’t know to what extent your psychologist is using sociology (or some other factor outside your control). People can’t always articulate why, but their intuition (correctly) leads them to deviate from the quoted success rate as more of these real-world variables get introduced.
True, the 75% would merely be a past track record (and I am in fact a poker player). Indeed, if the factors used were entirely or mostly beyond my control (and I knew this), I would two-box. However, two-boxing is not necessarily optimal just because you don’t know the mechanics of the predictor’s methods. In the limited predictor problem, the predictor doesn’t use simulations or scanners of any sort but instead uses logic, and yet one-boxers still win.
Agreed. To add on to this: it’s worth pointing out that Newcomb’s problem always takes the form of Simpson’s paradox. The one-boxers beat the two-boxers as a whole, but among agents predicted to one-box, the two-boxers win, and among agents predicted to two-box, the two-boxers also win.
The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect Omega’s prediction. The general rule is: “Try to make Omega think you’re one-boxing, but two-box whenever possible.” It’s just that in Newcomb’s problem proper, fulfilling the first imperative requires actually one-boxing.
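A small numerical rendering of that Simpson’s-paradox point, reusing the 75% accuracy figure from earlier in the thread and the standard payoffs (both assumed for illustration):

```python
# Within each predicted group two-boxers do better, yet one-boxers do better
# overall, because your choice is what (probabilistically) sets the group.
ACCURACY = 0.75
SMALL, BIG = 1_000, 1_000_000

payoff = {  # payoff[predicted][chosen]
    "one": {"one": BIG, "two": BIG + SMALL},
    "two": {"one": 0,   "two": SMALL},
}

# Two-boxing wins by $1,000 inside each predicted group ...
for predicted in ("one", "two"):
    print(predicted, payoff[predicted]["two"] - payoff[predicted]["one"])

# ... but averaging over what a 75%-accurate predictor actually predicts,
# one-boxers end up in the good group far more often.
avg_one = ACCURACY * payoff["one"]["one"] + (1 - ACCURACY) * payoff["two"]["one"]
avg_two = ACCURACY * payoff["two"]["two"] + (1 - ACCURACY) * payoff["one"]["two"]
print(f"{avg_one:,.0f} vs {avg_two:,.0f}")  # 750,000 vs 251,000
```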
So you would never one-box unless the simulator did some sort of scan/simulation upon your brain? But it’s better to one-box and be derivable as the kind of person to (probably) one-box than to two-box and be derivable as the kind of person to (probably) two-box.
The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect the actual arrangement of the boxes.
Your final decision never affects the actual arrangement of the boxes, but its causes do.
So you would never one-box unless the simulator did some sort of scan/simulation upon your brain?
I’d one-box when Omega had sufficient access to my source-code. It doesn’t have to be through scanning—Omega might just be a great face-reading psychologist.
But it’s better to one-box and be derivable as the kind of person to (probably) one-box than to two-box and be derivable as the kind of person to (probably) two-box.
We’re in agreement. As we discussed, this only applies insofar as you can control the factors that lead you to be classified as a one-boxer or a two-boxer. You can alter neither demographic information nor past behavior. But when (and only when) one-boxing causes you to be derived as a one-boxer, you should obviously one-box.
Your final decision never affects the actual arrangement of the boxes, but its causes do.
Well, that’s true for this universe. I just assume we’re playing in any given universe, some of which include Omegas who can tell the future (which implies bidirectional causality), since Psychohistorian3 started out with that sort of thought when I first commented.
Ok, so we do agree that it can be rational to one-box when predicted by a human (if they predict based upon factors you control such as your facial cues). This may have been a misunderstanding between us then, because I thought you were defending the computationalist view that you should only one-box if you might be an alternate you used in the prediction.
Yes, we do agree on that.