I wasn’t assuming that I knew beforehand.
It’s just that, if I have the one-boxing gene, it will compel me (in some manner not stated in the problem) to use a decision algorithm which will cause me to one-box, and similarly for the two-boxing gene.
Ah, okay. Well, the idea of my scenario is that you have no idea how any of this works. For example, the two-boxing gene could work by making you feel 100% sure that you have (or don’t have) the gene, in whichever way makes two-boxing seem like the better decision. So, until you actually make a decision, you have no idea which gene you have. (Preliminary decisions, as in Eells’ tickle defense paper, are also irrelevant.) You still have to make some kind of decision. The moment you one-box, you can be pretty sure that you don’t have the two-boxing gene, since it did not manage to trick you into two-boxing, which it usually does. So, why not just one-box and take the money? :-)
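To make that concrete, here is a minimal sketch of the evidential expected-value comparison behind "why not just one-box and take the money?" The $1,000,000/$1,000 payoffs and the 0.99 gene/action correlation are numbers assumed purely for illustration; the scenario itself doesn't fix them.

```python
# Evidential expected-value comparison for the genetic Newcomb problem.
# Payoffs and the gene/action correlation below are assumed for illustration.

BIG = 1_000_000   # opaque box, filled only if you lack the two-boxing gene
SMALL = 1_000     # transparent box, always there

# Assumed: how strongly your choice correlates with not having the gene
p_no_gene_given_one_box = 0.99
p_no_gene_given_two_box = 0.01

# Expected payoff conditional on each choice (evidential reasoning)
ev_one_box = p_no_gene_given_one_box * BIG
ev_two_box = p_no_gene_given_two_box * BIG + SMALL

print(f"one-box: ${ev_one_box:,.0f}")  # ~$990,000
print(f"two-box: ${ev_two_box:,.0f}")  # ~$11,000
```

Under any correlation this strong, the evidential expected value of one-boxing dominates, which is the intuition the argument above appeals to.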
My problem with all this is: if hypothetical-me’s decision-making process is determined by genetics, why are you asking real-me what the decision-making process should be?
Real-me can come up with whatever logic and arguments, but hypothetical-me will ignore all that and choose by some other method.
(Traditional Newcomb is different, because in that case hypothetical-me can use the same decision-making process as real-me.)
So, what if one day you learned that hypothetical-you is the actual you? That is, what if Omega actually came up to you right now, told you about the study, and put you into the “genetic Newcomb problem”?
Well, I can say that I’d two-box.
Does that mean I have the two-boxing gene?
Hypothetical-me can use the same decision-making process as real-me in the genetic Newcomb problem, just as in the original. This simply means that the real you stands for a hypothetical you who has the gene that makes them choose whatever the real you chooses, using the same decision process that the real you uses. Since you say you would two-box, that means the hypothetical you has the two-boxing gene.
I would one-box, and hypothetical-me has the one-boxing gene.