So you would never one-box unless the simulator did some sort of scan/simulation upon your brain? But it’s better to one-box and be derivable as the kind of person to (probably) one-box than to two-box and be derivable as the kind of person to (probably) two-box.
The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect the actual arrangement of the boxes.
Your final decision never affects the actual arrangement of the boxes, but its causes do.
So you would never one-box unless the simulator did some sort of scan/simulation upon your brain?
I’d one-box when Omega had sufficient access to my source-code. It doesn’t have to be through scanning—Omega might just be a great face-reading psychologist.
But it’s better to one-box and be derivable as the kind of person to (probably) one-box than to two-box and be derivable as the kind of person to (probably) two-box.
We’re in agreement. As we discussed, this only applies insofar as you can control the factors that lead you to be classified as a one-boxer or a two-boxer. You can alter neither demographic information nor past behavior. But when (and only when) one-boxing causes you to be derived as a one-boxer, you should obviously one-box.
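To make that concrete, here is a minimal sketch of the expected-value comparison, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a single accuracy figure p for however the predictor derives your disposition. The function name and the sample values of p are illustrative assumptions, not anything from the thread.

```python
# Minimal sketch (assumed standard payoffs: $1,000,000 opaque box, $1,000 transparent box).
# p = probability the predictor correctly classifies your actual disposition.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars given a predictor that classifies you correctly with probability p."""
    if one_box:
        # The opaque box holds $1,000,000 only if you were classified as a one-boxer.
        return p * 1_000_000
    else:
        # You always keep the visible $1,000; the opaque box is full only if the
        # predictor mistakenly classified you as a one-boxer.
        return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.6, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# One-boxing pulls ahead as soon as p exceeds 0.5005.
```

On those assumptions, being derivable as a one-boxer pays off even for a fairly weak face-reading psychologist, so long as the prediction is based on factors your decision actually influences.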
Your final decision never affects the actual arrangement of the boxes, but its causes do.
Well, that’s true for this universe. I’m just assuming we’re playing in any given universe, some of which include Omegas who can tell the future (which implies bidirectional causality), since Psychohistorian3 started out with that sort of thought when I first commented.
Ok, so we do agree that it can be rational to one-box when predicted by a human (if they predict based upon factors you control, such as your facial cues). This may have been a misunderstanding between us, then, because I thought you were defending the computationalist view that you should only one-box if you might be an alternate “you” used in the prediction.
Yes, we do agree on that.