No, I don’t, since you have a time-turner. (To be clear, non-hypothetical-me wouldn’t hit non-hypothetical-you either.) I would also one-box if I thought that Omega’s predictive power was evidence that it might have a time-turner or some other way of affecting the past. I still don’t think that’s relevant when there’s no reverse causality.
Back to Newcomb’s problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I’d two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.
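For concreteness, here is a minimal sketch of the payoff structure in that hair-color variant, assuming the usual amounts of $1,000,000 (opaque box, when filled) and $1,000 (transparent box); the function names and numbers are only illustrative.

```python
# Illustrative payoffs for the hair-color variant above, where Omega fills
# the opaque box based on hair color alone, not on the choice itself.
# Dollar amounts are the standard Newcomb ones; this is a sketch, not a model
# anyone in the thread specified.

def opaque_box_filled(hair_color: str) -> bool:
    # In this variant, the opaque box is filled iff the chooser has brown hair.
    return hair_color == "brown"

def payoff(choice: str, hair_color: str) -> int:
    opaque = 1_000_000 if opaque_box_filled(hair_color) else 0
    transparent = 1_000
    return opaque + transparent if choice == "two-box" else opaque

for choice in ("one-box", "two-box"):
    print(choice, payoff(choice, hair_color="brown"))
# one-box 1000000
# two-box 1001000
# Because the prediction keys on hair color rather than on the choice,
# two-boxing strictly dominates for a brown-haired chooser.
```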
I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes… but doesn’t change the boxes.
Back to Newcomb’s problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I’d two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.
If the agent filling the boxes follows a consistent, predictable pattern you’re outside of, you can certainly use that information to do this. In Newcomb’s Problem, though, Omega follows a consistent, predictable pattern you’re inside of. It’s logically inconsistent for you to two-box and find they both contain money, or to pick one box and find it’s empty.
I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes… but doesn’t change the boxes.
Why is whether your decision actually changes the boxes important to you? If you know that picking one box will result in your receiving a million dollars, and picking two boxes will result in getting a thousand dollars, do you have any concern that overrides making the choice that you expect to make you more money?
A decision process of “at all times, do whatever I expect to have the best results” will, at worst, reduce to exactly the same behavior as “at all times, do whatever I think will have a causal relationship with the best results.” In some cases, such as Newcomb’s problem, it has better results. What do you think the concern with causality actually does for you?
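For comparison, here is a minimal expected-value sketch for the standard problem, assuming the usual amounts and modelling Omega’s accuracy as a probability conditional on the actual choice (which is exactly the modelling step at issue); the 0.99 figure is arbitrary.

```python
# Expected value of each choice in standard Newcomb, treating Omega's
# prediction as matching the actual choice with probability p.
# The accuracy p = 0.99 is an arbitrary stand-in for "very reliable".

def expected_value(choice: str, p: float = 0.99) -> float:
    million, thousand = 1_000_000, 1_000
    if choice == "one-box":
        # Opaque box is filled iff Omega predicted one-boxing.
        return p * million
    # Two-boxing: the opaque box is filled only if Omega got it wrong.
    return (1 - p) * million + thousand

for choice in ("one-box", "two-box"):
    print(choice, expected_value(choice))
# -> one-box ≈ 990000, two-box ≈ 11000
# Scored this way, "do whatever I expect to have the best results"
# one-boxes whenever p is much better than chance; the causal version
# of the rule still two-boxes, which is where the disagreement lies.
```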
We don’t always agree here on what decision theories get the best results (as you can see by observing the offshoot of this conversation between Wedrifid and myself), but what we do generally agree on is that the quality of decision theories is determined by their results. If you argue yourself into a decision theory that doesn’t serve you well, you’ve only managed to shoot yourself in the foot.
Why is whether your decision actually changes the boxes important to you?
[....]
If you argue yourself into a decision theory that doesn’t serve you well, you’ve only managed to shoot yourself in the foot.
In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course, if I had the ability to precommit to one-boxing I would; so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I’d made a precommitment… still two boxes.)
I think I’m going to back out of this discussion until I understand decision theory a bit better.
I think I’m going to back out of this discussion until I understand decision theory a bit better.
Feel free. You can revisit this conversation any time you feel like it. Discussion threads never really die here; there’s no community norm against replying to comments long after they’re posted.