I think we need to remember here the difference between logical influence and causal influence.
My genes can cause me to be inclined towards smoking, and my genes can cause me to get lesions. If I choose to smoke, not knowing my genes, then that’s evidence for what my genes say, and it’s evidence about whether I’ll get lesions; but it doesn’t actually causally influence the matter.
My genes can incline me towards one-boxing, and can incline Omega towards putting $1M in the box. If I choose to two-box despite my inclinations, then that provides me with evidence about what Omega did, but it doesn’t causally influence the matter.
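The smoking-lesion structure here can be sketched as a toy simulation (the deterministic gene→lesion and gene→inclination links and the 50% gene frequency are assumptions purely for illustration): conditioning on smoking changes the *evidence* about lesions, while intervening on smoking changes nothing.

```python
import random

random.seed(0)

def sample(intervene_smoke=None):
    """One draw from a toy causal model: the gene drives both the
    inclination to smoke and the lesion; smoking itself causes nothing."""
    gene = random.random() < 0.5
    lesion = gene                  # gene causes lesions
    inclined = gene                # gene causes the inclination
    if intervene_smoke is None:
        smoke = inclined           # by default, act on the inclination
    else:
        smoke = intervene_smoke    # an intervention ignores the gene
    return smoke, lesion

N = 100_000
# Observational: among natural smokers, lesions are common (evidence).
obs = [sample() for _ in range(N)]
p_lesion_given_smoke = (sum(1 for s, l in obs if s and l)
                        / max(1, sum(1 for s, l in obs if s)))
# Interventional: forcing everyone to smoke leaves the lesion rate alone.
forced = [sample(intervene_smoke=True) for _ in range(N)]
p_lesion_do_smoke = sum(1 for s, l in forced if l) / N

print(round(p_lesion_given_smoke, 2))  # ~1.0: smoking is evidence of the gene
print(round(p_lesion_do_smoke, 2))     # ~0.5: smoking causes no lesions
```

The gap between the two numbers is exactly the gap between evidential and causal influence being argued about here.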
If I don’t know which of two worlds I’m in, I can’t increase the probability of one by saying “in world A, I’m more likely to do X than in world B, so I’m going to do X”. If nothing else, if I thought that worked, then I would do it whatever world I was in, and it would no longer be true.
In standard Newcomb, my inclination to one-box actually does make me one-box. In this version, my inclination to one-box is just a node that you’ve labelled “inclination to one-box”, and you’ve said that Omega cares about the node rather than about whether or not I one-box. But you’re still permitting me to two-box, so that node might just as well be “inclination to smoke”.
In the original Newcomb’s problem, am I allowed to say “in the world with the million, I am more likely to one-box than in the world without, so I’m going to one-box”? If I thought this worked, then I would do it no matter what world I was in, and it would no longer be true...
Except that it is still true. I can definitely reason this way, and if I do, then of course I had the disposition to one-box, and of course Omega put the million there; because the disposition to one-box was the reason I wanted to reason this way.
And likewise, in the genetic variant, I can reason this way, and it will still work, because the one-boxing gene is responsible for me reasoning this way rather than another way.
In the original, you would say: “in the world where I one-box, the million is more likely to be there, so I’ll one-box”.
> the one-boxing gene is responsible for me reasoning this way rather than another way.
If there’s a gene that makes you think black is white, then you’re going to get killed on the next zebra crossing. If there’s a gene that makes you misunderstand decision theory, you’re going to make some strange decisions. If Omega is fond of people with that gene, then lucky you. But if you don’t have the gene, then acting like you do won’t help you.
Another reframing: in this version, Omega checks to see if you have the photic sneeze reflex (PSR), then forces you to stare at a bright light and checks whether or not you sneeze. Ve gives you $1k if you don’t sneeze and, independently, $1M if you have the PSR gene.
If I can choose whether or not to sneeze, then I should not sneeze. Maybe the PSR gene makes it harder for me to not sneeze, in which case I can be really happy that I have to stifle the urge to sneeze, but I should still not sneeze.
But if the PSR gene just makes me sneeze, then why are we even asking whether I should sneeze or not?
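For what it’s worth, the payoff structure of this reframing can be written out directly (the dollar amounts are from the setup above; the payoff function itself is my own hypothetical rendering): whatever the gene says, not sneezing is $1k better.

```python
def payoff(has_psr_gene: bool, sneezes: bool) -> int:
    # $1M depends only on the gene; $1k only on not sneezing.
    return (1_000_000 if has_psr_gene else 0) + (0 if sneezes else 1_000)

# Holding the gene fixed either way, not sneezing strictly dominates:
for gene in (True, False):
    assert payoff(gene, sneezes=False) == payoff(gene, sneezes=True) + 1_000
```

The dominance argument only gets off the ground, of course, if sneezing is a choice at all, which is the point of the paragraph above.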
I think this is addressed by my top level comment about determinism.
But if you don’t see how it applies, then imagine an AI reasoning like you have above.
“My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I’m lucky. But if he’s not, then acting like I have the kind of programming he likes isn’t going to help me. So why should I one-box? That would be acting like I had one-box programming. I’ll just take everything that is in both boxes, since it’s not up to me.”
Of course, when I examined the thing’s source code, I knew it would reason this way, and so I did not put the million.
So I think where we differ is that I don’t believe in a gene that controls my decision in the same way that you do. I don’t know how well I can articulate myself, but:
As an AI, I can choose whether my programming makes me one-box or not, by one-boxing or not. My programming isn’t responsible for my reasoning; it is my reasoning. If Omega looks at my source code and works out what I’ll do, then there are no worlds where Omega thinks I’ll one-box, but I actually two-box.
But imagine that all AIs have a constant variable in their source code, unhelpfully named TMP3. AIs with TMP3=true tend to one-box in Newcomblike problems, and AIs with TMP3=false tend to two-box. Omega decides whether to put in $1M by looking at TMP3.
(Does the problem still count as Newcomblike? I’m not sure that it does, so I don’t know if TMP3 correlates with my actions at all. But we can say that TMP3 correlates with how AIs act in the genetic Newcomb problem (GNP), instead.)
If I have access to my source code, I can find out whether I have TMP3=true or false. And regardless of which it is, I can two-box. (If I can’t choose to two-box, after learning that I have TMP3=true, then this isn’t me.) Since I can two-box without changing Omega’s decision, I should.
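A minimal sketch of that argument (the `Agent` class and `TMP3` flag are hypothetical names from the thought experiment above, not any real API): since Omega reads only the constant, the payoff of the action separates cleanly from the payoff of the gene-analogue.

```python
class Agent:
    TMP3 = True  # the constant Omega inspects; flipping it changes nothing below

def omega_fills_box(agent_cls) -> int:
    # Omega looks only at TMP3, never at what the agent actually does.
    return 1_000_000 if agent_cls.TMP3 else 0

def play(two_box: bool) -> int:
    opaque = omega_fills_box(Agent)  # settled before, and independently of, the choice
    return opaque + (1_000 if two_box else 0)

# Whatever TMP3 is, two-boxing gains exactly $1k:
assert play(two_box=True) == play(two_box=False) + 1_000
```

This is just the dominance reasoning from the paragraph above made explicit: the $1M term never mentions the action.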
Whereas in the original Newcomb’s problem, I can look at my source code, and… maybe I can prove whether I one- or two-box. But if I can, that doesn’t constrain my decision so much as predict it, in the same way that Omega can; the prediction of “one-box” is going to take into account the fact that the arguments for one-boxing overwhelm the consideration of “I really want to two-box just to prove myself wrong”. More likely, I can’t prove anything. And I can one- or two-box, but Omega is going to predict me correctly, unlike in GNP, so I one-box.
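The contrast with the original problem can be sketched the same way (again with hypothetical names): here Omega’s “prediction” is just a run of the agent’s own decision procedure, so it tracks the action itself rather than a side variable.

```python
def omega_predicts(decide) -> str:
    # Omega predicts by simulating the very procedure the agent will run.
    return decide()

def play_original(decide) -> int:
    opaque = 1_000_000 if omega_predicts(decide) == "one-box" else 0
    take_both = decide() == "two-box"
    return opaque + (1_000 if take_both else 0)

# There is no world where Omega thinks I'll one-box but I actually two-box:
assert play_original(lambda: "one-box") == 1_000_000
assert play_original(lambda: "two-box") == 1_000
```

Because the prediction and the act cannot come apart, one-boxing wins here even though two-boxing dominates in the TMP3 setup.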
The case where I don’t look at my source code is more complicated (maybe AIs with TMP3=true will never choose to look?), but I hope this at least illustrates why I don’t find the two comparable.
(That said, I might actually one-box, because I’m not sufficiently convinced of my reasoning.)
“I don’t believe in a gene that controls my decision” refers to reality, and of course I don’t believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.
As you note, if an AI could read its source code and saw that it said “one-box”, then it would still one-box, because it simply does what it is programmed to do. First of all, this violates the conditions as proposed (I said the AIs cannot look at their source code, and Caspar42 stated that you do not know whether or not you have the gene).
But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says “one-box”, then you could still two-box, so it couldn’t work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then be overwhelmed by a sudden desire to one-box. Perhaps you would think again and change your mind. But one way or another you would end up one-boxing. And this “doesn’t constrain my decision so much as predict it”: obviously, both in the case of the AI and in the case of the gene, causality in reality does run from the source code to one-boxing, or from the gene to one-boxing. The two cases are entirely the same. Causality runs only from past to future, but to you it feels just like a normal choice that you make in the normal way.
I was referring to “in principle”, not to reality.
> You believe that if you saw you had the gene that says “one-box”, then you could still two-box
Yes. I think that if I couldn’t do that, it wouldn’t be me. If we don’t permit people without the two-boxing gene to two-box (the question as originally written did, but we don’t have to), then this isn’t a game I can possibly be offered. You can’t take me, and add a spooky influence which forces me to make a certain decision one way or the other, even when I know it’s the wrong way, and say that I’m still making the decision. So again, we’re at the point where I don’t know why we’re asking the question. If not-me has the gene, he’ll do one thing; if not, he’ll do the other; and it doesn’t make a difference what he should do. We’re not talking about agents with free action, here.
Again, I’m not sure exactly how this extends to the case where an agent doesn’t know whether they have the gene.
What if we take the original Newcomb, then Omega puts the million in the box, and then tells you “I have predicted with 100% certainty that you are only going to take one box, so I put the million there”?
Could you two-box in that situation, or would that take away your freedom?
If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same.
If you say you could not, why would that be you when the genetic case would not be?
Unless something happens out of the blue to force my decision—in which case it’s not my decision—then this situation doesn’t happen. There might be people for whom Omega can predict with 100% certainty that they’re going to one-box even after Omega has told them his prediction, but I’m not one of them.
(I’m assuming here that people get offered the game regardless of their decision algorithm. If Omega only makes the offer to people whom he can predict certainly, we’re closer to a counterfactual mugging. At any rate, it changes the game significantly.)
I agree that in reality it is often impossible to predict someone’s actions if you are going to tell them your prediction. That is why it may well be that the situation where you know which gene you have is simply impossible. But in any case this is all hypothetical, because the situation posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.
EDIT: You’re really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision theoretic terms. Here you’re arguing not about the decision theory issue, but whether or not the situations involved are possible in reality. If Omega can’t predict with certainty when he tells his prediction, then I can equivalently say that the gene only predicts with certainty when you don’t know about it. Knowing about the gene may allow you to two-box, but that is no different from saying that knowing Omega’s decision before you make your choice would allow you to two-box, which it would.
Basically anything said about one case can be transformed into the other case by fairly simple transpositions. This should be obvious.
> Of course, when I examined the thing’s source code, I knew it would reason this way, and so I did not put the million.
Then you’re talking about an evil decision problem. But neither in the original nor in the genetic Newcomb’s problem is your source code investigated.
Sorry, tapping out now.
EDIT: but brief reply to your edit: I’m well aware that you think they’re the same, and telling me that I’m not getting the point is super unhelpful.
> Then you’re talking about an evil decision problem. But neither in the original nor in the genetic Newcomb’s problem is your source code investigated.
No, it is not an evil decision problem, because I did that not because of the particular reasoning, but because of the outcome (taking both boxes).
The original does not specify how Omega makes his prediction, so it may well be by investigating source code.