Wait, you think I have the two-boxing gene? If that’s the case, one-boxing won’t help me; there’s no causal link between my choice and which gene I have, unlike standard Newcomb, in which there is a causal link between my choice and the contents of the box, given TDT’s definition of “causal link”.
Sure there is a link. The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices. In the standard Newcomb, if you one-box, then you had the disposition to one-box, and Omega put the million. In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.
OP here said (emphasis added)
A study shows that *most* people
Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different.
In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.
Wrong; it’s perfectly possible to have the gene to one-box but two-box.
(If the facts were as stated in the OP, I’d actually expect conditioning on certain aspects of my decision-making processes to remove the correlation; that is, people who think similarly to me would have less correlation with choice-gene. If that prediction was stipulated away, my choice *might* change; it depends on exactly how that was formulated.)
Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different.
So, as soon as it’s not 100% of two-boxers who have the two-boxing gene, but only 99.9%, you assume that you are in the 0.1%?
So, as soon as it’s not 100% of two-boxers who have the two-boxing gene, but only 99.9%, you assume that you are in the 0.1%?
You didn’t specify any numbers. If the actual number were 99.9%, I’d consider that strong evidence against some of my beliefs about the relationship between decisions and genes. I was implicitly assuming a considerably lower number (around 70%), which would be more compatible with those beliefs, and in that case I would expect to be part of the 30% (with greater than 30% probability).
If the number was, in fact, 99.9%, I’d have to conclude that genes are far more closely tied to the specifics of how we think than I currently believe, and that might be enough to make this an actual Newcomb’s problem. The mechanism for the equivalence would be that it creates a causal link, in TDT terms, from my reaching an opinion to my having a certain gene; “gene” would then just be another word for “brain state”, as I’ve said elsewhere on this post.
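To make the arithmetic explicit: a purely evidential expected-value calculation, which treats the stated correlation as the probability that the prediction matches the choice, favors one-boxing at any accuracy above about 50.05%. (The $1,000,000/$1,000 payoffs are the standard Newcomb values, assumed here since the OP doesn’t state them.) That’s why the real dispute above is whether the population correlation is the right probability for *me* to condition on, not the 99.9%-vs-70% figure itself:

```python
# Standard Newcomb payoffs, assumed here (the OP doesn't state them).
M, K = 1_000_000, 1_000

def ev_one_box(p):
    # p = probability the predictor (gene) matches the choice;
    # the opaque box is full iff the prediction was "one-box".
    return p * M

def ev_two_box(p):
    # The opaque box is full only if the predictor erred.
    return (1 - p) * M + K

# One-boxing wins evidentially whenever p * M > (1 - p) * M + K,
# i.e. p > (M + K) / (2 * M) = 0.5005.
threshold = (M + K) / (2 * M)
print(threshold)  # 0.5005
```

So even a 70% correlation is well above the naive break-even point; the question is whether that 70% survives conditioning on how I actually think.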
This is confusing the issue. I would guess that the OP wrote “most” because Newcomb’s problem is sometimes posed in such a way that the predictor is only right most of the time.
And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.
But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb’s problem and in this case.
And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.
Exactly; but since a vast majority of players won’t do this, Omega can still be right most of the time.
But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb’s problem and in this case.
Can you formulate that scenario, then, or point me to somewhere it’s been formulated? It would have to be a world with cognition very different from ours if genes determine choices 100% of the time; arguably, genes in that world would correspond to brain states in our world in a predictive sense, in which case this collapses to regular Newcomb, and I’d one-box.
The problem presented by the gene-scenario, as stated by OP, is
Now, how does this problem differ from the smoking lesion or Yudkowsky’s (2010, p.67) chewing gum problem?
However, as soon as you add in a 100% correlation, it becomes very different, because certain gene/choice combinations are no longer possible at all. If the smoking lesion problem were also 100%, then I’d agree that you shouldn’t smoke, because whatever “gene” we’re talking about could be completely identified (in a sense) with the brain state that leads to my decision.
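The point about impossible combinations can be made concrete with a toy model (all numbers are hypothetical): once the lesion–smoking correlation is 100%, conditioning on the choice pins down the lesion state exactly, so there is no “smoke but lesion-free” outcome left to hope for.

```python
# Toy smoking-lesion model with the correlation pushed to 100%,
# as discussed above. The 0.9 cancer risk is an invented number.
P_CANCER_IF_LESION = 0.9

def p_cancer_given_choice(smoke: bool) -> float:
    # Under the 100%-correlation stipulation, you smoke if and only
    # if you have the lesion, so the choice completely identifies
    # the lesion state (and hence the cancer risk).
    lesion = smoke
    return P_CANCER_IF_LESION if lesion else 0.0

print(p_cancer_given_choice(True))   # 0.9
print(p_cancer_given_choice(False))  # 0.0
```

At anything below 100%, `lesion = smoke` fails and the usual screening-off argument comes back into play.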
You are right that 100% correlation requires an unrealistic situation. This is true also in the original Newcomb, i.e. we don’t actually expect anything in the real world to be able to predict our actions with 100% accuracy. Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.
The genetic Newcomb requires an even more unrealistic scenario, since in the real world genes do not predict actions with anything close to 100% certitude. I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn’t one.
Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.
We could, but I’m not going to think about those unless the problem is stated a bit more precisely, so we don’t get caught up in arguing over the exact parameters again. The details on how exactly Omega determines what to do are very important. I’ve actually said elsewhere that if you didn’t know how Omega did it, you should try to put probabilities on different possible methods, and do an EV calculation based on that; is there any way that can fail badly?
(Also, if there was any chance of Omega existing and taking cues from our public announcements, the obvious rational thing to do would be to stop talking about it in public.)
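The “probabilities on different possible methods” calculation I mentioned can be sketched like this; the candidate methods and their weights are invented purely for illustration, as are the standard Newcomb payoffs:

```python
# Expected value of each choice, averaged over hypotheses about how
# Omega decides. Methods, weights, and payoffs are all assumptions.
M, K = 1_000_000, 1_000  # standard Newcomb payoffs (assumed)

# Each hypothesis: (prior weight, P(opaque box is full | my choice)).
methods = {
    "reads brain state, accurately": (0.6, {"one": 0.99, "two": 0.01}),
    "coin flip":                     (0.3, {"one": 0.5,  "two": 0.5}),
    "always fills the box":          (0.1, {"one": 1.0,  "two": 1.0}),
}

def ev(choice):
    total = 0.0
    for weight, p_full in methods.values():
        payout = p_full[choice] * M + (K if choice == "two" else 0)
        total += weight * payout
    return total

print(ev("one"), ev("two"))  # compare and pick the larger
```

With these made-up weights one-boxing dominates, but the point is the structure: each hypothesis about Omega’s method contributes its own conditional probability that the opaque box is full, and the failure mode to worry about is a badly miscalibrated prior over methods.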
I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn’t one.
I think people may have been trying to solve the case mentioned in OP, which is less than 100%, and does have a difference.