Sure. If you augment the Smoking Lesion and Newcomb scenarios with extra details about what’s going on, those extra details can matter. So, e.g., if Omega just predicts that you’ll take two boxes iff you’ve usually said “I would take two boxes”, then you should probably say “I would take one box” and then take two boxes. But this version of Newcomb is utterly incompatible with how the Newcomb problem is standardly presented: in a world where Omega was known to operate this way, Omega’s success rate would be approximately 0% rather than the near-100% that makes the problem actually interesting. (Because everyone would say “I would take one box” and then everyone would take two boxes.)
So your version of Newcomb, designed to yield a two-boxing decision, does yield a two-boxing decision, but only by being a Newcomb problem in name only.
I was not assuming a world where Omega was known to operate this way. I originally said that it matters how the choice got correlated with one-boxing, and this was an example. In order for it to work, as you are pointing out, the method has to remain unknown. In other words, suppose someone very wealthy comes forward and says that he is going to test the Newcomb problem in real life, and that he will act as Omega. We don’t know what his method is, but it turns out that he has a statistically high rate of success. Now suppose you end up with insider knowledge that he is just judging based on a person’s past internet comments. It does not seem impossible that this could give a positive rate of success in the real world as long as it is unknown; presumably people who say they would one-box would be more likely to actually one-box. (Example: Golden Balls was a game show involving the Prisoner’s Dilemma. Before cooperating or defecting, the contestants were allowed to talk to each other for a certain period. People analyzing it afterwards determined that a person explicitly and directly saying “I will cooperate” had a 30% higher chance of actually cooperating; people who weren’t going to cooperate were generally vaguer about their intentions.) But once you have the insider knowledge, it makes sense to go around saying you will take only one box, and then take both anyway.
This happens because the correlation between your choice and the million is removed once you control for the past comments. The point of the original examples was a correlation between your choice and the million that you cannot separate out in this way. For the Smoking Lesion to be equivalent, it has to be equally impossible to remove the correlation between your choice and the lesion, as I said in the long comment.
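To make that concrete, here is a toy simulation of the unknown-method scenario. All the numbers are invented purely for illustration: people’s past comments are assumed to correlate with their actual choices, and the predictor is assumed to look only at the comments. Overall, one-boxers are much more likely to find the million, but once you condition on what someone said in the past, the choice itself carries no further information about the million.

```python
import random

# Toy model (all numbers made up for illustration) of the scenario above:
# this "Omega" stuffs the million iff a player's past comments say "one box".
random.seed(0)

players = []
for _ in range(100_000):
    says_one_box = random.random() < 0.5          # what their old comments say
    # assumed correlation: people mostly do what they said they would do
    p_one_box = 0.8 if says_one_box else 0.2
    one_boxes = random.random() < p_one_box       # their actual choice
    million = says_one_box                        # predictor looks only at comments
    players.append((says_one_box, one_boxes, million))

def rate_million(group):
    return sum(m for _, _, m in group) / len(group)

accuracy = sum(s == c for s, c, _ in players) / len(players)
print("predictor accuracy:", round(accuracy, 2))                      # ~0.8

print("P(million | one-box):",
      round(rate_million([p for p in players if p[1]]), 2))           # ~0.8
print("P(million | two-box):",
      round(rate_million([p for p in players if not p[1]]), 2))       # ~0.2

# Controlling for the past comments removes the correlation entirely:
said_one = [p for p in players if p[0]]
print("P(million | said one-box, one-boxed):",
      round(rate_million([p for p in said_one if p[1]]), 2))          # 1.0
print("P(million | said one-box, two-boxed):",
      round(rate_million([p for p in said_one if not p[1]]), 2))      # 1.0
```

With these assumed numbers the predictor is right about 80% of the time, yet anyone with the insider knowledge does strictly better by two-boxing, since conditional on the comments the million is there (or not) regardless of the choice.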
It does not seem impossible that this could give a positive rate of success in the real world
I don’t know about you, but for me to give serious consideration to one-boxing in a Newcomb situation, the box-stuffer would need to have demonstrated something better than “a positive rate of success”. I agree that if I had insider knowledge that they were doing it by looking at people’s past internet comments then two-boxing would be rational, but I don’t think any advocates of one-boxing would disagree with that. The situation you’re describing just isn’t an actual Newcomb problem any more.
It seems quite possible for a human to achieve, say, 75% success on both one-boxers and two-boxers, maybe not with such a simple rule, but certainly without any actual mind-reading ability. If that is the case, then there must be plenty of one-boxers who would one-box against someone who was getting a 75% success rate, even if you aren’t one of them.
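To see why a 75% rate would already be enough, here is the back-of-the-envelope calculation on the evidential reading (the 75% accuracy is an assumed figure, not part of the original problem statement):

```python
# Conditional expected values, assuming a predictor who is right 75% of the
# time on both one-boxers and two-boxers (illustrative numbers only).
MILLION, THOUSAND, ACCURACY = 1_000_000, 1_000, 0.75

# One-boxing: the million is there in the 75% of cases where you were predicted correctly.
ev_one_box = ACCURACY * MILLION

# Two-boxing: the million is there only in the 25% of cases where the prediction
# missed, and you keep the thousand either way.
ev_two_box = (1 - ACCURACY) * MILLION + THOUSAND

print(ev_one_box, ev_two_box)   # 750000.0 251000.0
```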
I don’t think any advocates of one-boxing would disagree with that. The situation you’re describing just isn’t an actual Newcomb problem any more.
I agree. That was the whole point. I was not trying to say that one-boxers would disagree, but that they would agree. The point is that to have an “actual Newcomb problem”, your personal belief about whether you will get the million has to actually vary with your actual choice to take one or two boxes in the particular case; if your belief isn’t going to vary, even in the individual case, you will just take both boxes according to the argument, “I’ll get whatever I would have gotten with one box, plus the thousand.”
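To put the contrast in numbers (illustrative figures only): if your credence that the million is there moves with your actual choice, the conditional expectations come out ahead for one-boxing; if it stays put no matter what you choose, two-boxing is just one-boxing plus the thousand.

```python
# Illustrative numbers only: how the argument changes depending on whether
# your credence that the million is in the box moves with your actual choice.
MILLION, THOUSAND = 1_000_000, 1_000

def evs(p_million_if_one_box, p_million_if_two_box):
    ev_one = p_million_if_one_box * MILLION
    ev_two = p_million_if_two_box * MILLION + THOUSAND
    return ev_one, ev_two

# An "actual Newcomb problem": your belief varies with the choice you make.
print(evs(0.9, 0.1))    # roughly (900000, 101000) -> take one box

# Belief fixed regardless of choice (e.g. after the insider knowledge):
# two-boxing is just "whatever I would have gotten with one box, plus the thousand".
print(evs(0.5, 0.5))    # roughly (500000, 501000) -> take both boxes
```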
I was simply saying that since Eliezer constructs the Smoking Lesion as a counterexample to EDT (evidential decision theory), we need to treat the “actual Smoking Lesion” in the same way: it is only the “actual Smoking Lesion problem” if your belief that you have the lesion is actually going to vary depending on whether you choose to smoke or not.
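And the parallel sketch for the Smoking Lesion, again with made-up utilities and probabilities: EDT only tells you to abstain if your credence that you have the lesion actually shifts with your choice; if it is the same either way, smoking simply adds its own utility.

```python
# The same structure for the Smoking Lesion, with made-up utilities and probabilities.
UTILITY_OF_SMOKING = 1_000
UTILITY_OF_CANCER = -1_000_000
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_WITHOUT_LESION = 0.01

def evs(p_lesion_if_smoke, p_lesion_if_abstain):
    def cancer_term(p_lesion):
        p_cancer = (p_lesion * P_CANCER_GIVEN_LESION
                    + (1 - p_lesion) * P_CANCER_WITHOUT_LESION)
        return p_cancer * UTILITY_OF_CANCER
    ev_smoke = UTILITY_OF_SMOKING + cancer_term(p_lesion_if_smoke)
    ev_abstain = cancer_term(p_lesion_if_abstain)
    return ev_smoke, ev_abstain

# An "actual Smoking Lesion problem": your belief about the lesion varies with your choice.
print(evs(0.8, 0.2))    # roughly (-721000, -188000) -> EDT says don't smoke

# Belief the same either way: smoking just adds its own utility on top.
print(evs(0.5, 0.5))    # roughly (-454000, -455000) -> EDT says smoke
```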