The two-boxer never assumes that the decision isn’t predictable. They just say that the prediction can no longer be influenced, and so you may as well take the $1000 from the transparent box.
In terms of your hypothetical scenario, the question for the two-boxer will be whether the decision causally influences the result of this brain scan. If yes, then the two-boxer will one-box (a strange sentence, I admit). If no, the two-boxer will two-box.
How would it not causally influence the brain scan? Are you saying two-boxers can make decisions without using their brains? ;-)
In any event, you didn’t answer the question I asked, which was at what point in time does the two-boxer label the decision “irrational”. Is it still “irrational” in their estimation to two-box, in the case where Omega decides after they do?
Notice that in both cases, the decision arises from information already available: the state of the chooser’s brain. So even in the original Newcomb’s problem, there is a causal connection between the chooser’s brain state and the boxes’ contents. That’s why I and other people are asking what role time plays: if you are using the correct causal model, where your current brain state has causal influence over your future decision, then the only distinction two-boxers can base their “irrational” label on is time, not causality.
The alternative is to argue that it is somehow possible to make a decision without using your brain, i.e., without past causes having any influence on your decision. You could maybe do that by flipping a coin, but then, is that really a “decision”, let alone “rational”?
If a two-boxer argues that their decision cannot cause a past event, they have the causal model wrong. The correct model is one of a past brain state influencing both Omega’s decision and your own future decision.
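That common-cause model can be sketched in a few lines of Python (every name here is illustrative, not part of any standard formalism): the past brain state is the single upstream cause of both Omega’s earlier prediction and the chooser’s later decision, so the two always agree without the decision ever causing the prediction.

```python
# Toy common-cause model: past brain state -> Omega's prediction (earlier)
#                         past brain state -> chooser's decision (later)

def predict(brain_state: str) -> str:
    # Omega reads the brain state before the choice is ever made.
    return "one-box" if brain_state == "one-boxer" else "two-box"

def decide(brain_state: str) -> str:
    # The later decision is produced by that same brain state.
    return "one-box" if brain_state == "one-boxer" else "two-box"

for state in ("one-boxer", "two-boxer"):
    # Prediction and decision always agree, yet neither causes the other:
    # the correlation comes entirely from the shared upstream cause.
    assert predict(state) == decide(state)
```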
For me, the simulation argument made it obvious that one-boxing is the rational choice, because it makes clear that your decision is algorithmic. “Then I’ll just decide differently!” is, you see, still a fixed algorithm. There is no such thing as submitting one program to Omega and then running a different one, because you are the same program in both cases—and it’s that program that is causal over both Omega’s behavior and the “choice you would make in that situation”. Separating the decision from the deciding algorithm is incoherent.
As someone else mentioned, the only way the two-boxer’s statements make any sense is if you can separate a decision from the algorithm used to arrive at that decision. But nobody has presented any concrete theory by which one can arrive at a decision without using some algorithm, and whatever algorithm that is, is your “agent type”. It doesn’t make any sense to say that you can be the type of agent who decides one way, but when it actually comes to deciding, you’ll decide another way.
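One hedged way to make the “same program in both cases” point concrete: suppose Omega predicts simply by running the very function the agent uses to decide (a toy model; `play` and the payoff amounts are my own stand-ins). There is then no seam at which one program gets submitted and a different one gets run.

```python
from typing import Callable

def omega_fill_boxes(agent: Callable[[], str]) -> int:
    # Omega "predicts" by simply running the agent's own decision procedure.
    predicted = agent()
    return 1_000_000 if predicted == "one-box" else 0

def play(agent: Callable[[], str]) -> int:
    opaque = omega_fill_boxes(agent)  # the prediction runs the agent...
    choice = agent()                  # ...and the real choice runs the same agent
    return opaque if choice == "one-box" else opaque + 1_000

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

assert play(one_boxer) == 1_000_000  # the one-boxing program wins big
assert play(two_boxer) == 1_000      # the two-boxing program gets only the $1000
```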
How does your hypothetical two-boxer respond to simulation or copy arguments? If you have no way of knowing whether you’re the simulated version of you, or the real version of you, which decision is rational then?
To put it another way, a two-boxer is arguing that they ought to two-box while simultaneously not being the sort of person who would two-box—an obvious contradiction. The two-boxer is either arguing for this contradiction, or arguing about the definitions of words by saying “yes, but that’s not what ‘rational’ means”.
Indeed, most two-boxers I’ve seen around here seem to alternate between those two positions, falling back to the other whenever one is successfully challenged.
In any event, you didn’t answer the question I asked, which was at what point in time does the two-boxer label the decision “irrational”. Is it still “irrational” in their estimation to two-box, in the case where Omega decides after they do?
Time is irrelevant to the two-boxer except as a proof of causal independence, so there’s no interesting answer to this question. The two-boxer is concerned with causal independence. If a decision cannot help but causally influence the brain scan, then the two-boxer would one-box.
Notice that in both cases, the decision arises from information already available: the state of the chooser’s brain. So even in the original Newcomb’s problem, there is a causal connection between the chooser’s brain state and the boxes’ contents. That’s why I and other people are asking what role time plays: if you are using the correct causal model, where your current brain state has causal influence over your future decision, then the only distinction two-boxers can base their “irrational” label on is time, not causality.
Two-boxers use a causal model where your current brain state has causal influence on your future decisions. They are interested in the causal effects of the decision, not the brain state; hence the causal-independence criterion does distinguish the cases in their view, and they need not appeal to time.
If a two-boxer argues that their decision cannot cause a past event, they have the causal model wrong. The correct model is one of a past brain state influencing both Omega’s decision and your own future decision.
They have the right causal model. They just disagree about which downstream causal effects we should be considering.
For me, the simulation argument made it obvious that one-boxing is the rational choice, because it makes clear that your decision is algorithmic. “Then I’ll just decide differently!” is, you see, still a fixed algorithm. There is no such thing as submitting one program to Omega and then running a different one, because you are the same program in both cases—and it’s that program that is causal over both Omega’s behavior and the “choice you would make in that situation”. Separating the decision from the deciding algorithm is incoherent.
No one denies this. Everyone agrees about what the best program is; they just disagree about what this means for the best decision. The two-boxer says that, unfortunately, the best program leads us to make a non-optimal decision, which is a shame (but worth it, because the benefits outweigh the cost). This doesn’t change the fact, they say, that two-boxing is the optimal decision (while acknowledging that the optimal program one-boxes).
How does your hypothetical two-boxer respond to simulation or copy arguments? If you have no way of knowing whether you’re the simulated version of you, or the real version of you, which decision is rational then?
I suspect that different two-boxers would respond differently, as anthropic-style puzzles tend to elicit disagreement.
To put it another way, a two-boxer is arguing that they ought to two-box while simultaneously not being the sort of person who would two-box—an obvious contradiction. The two-boxer is either arguing for this contradiction, or arguing about the definitions of words by saying “yes, but that’s not what ‘rational’ means”.
Well, they’re saying that the optimal algorithm is a one-boxing algorithm while the optimal decision is two-boxing. They can explain why, as well (algorithms have different causal effects from decisions). There is no immediate contradiction here; it would take serious argument to show one (for example, an argument showing that decisions and algorithms are the same thing). By way of analogy, imagine a game where you choose a colour and then later choose a number between 1 and 4. As for the number: if you pick n, you get $n. As for the colour: if you pick red, you get $0; if you pick blue, you get $5 but then don’t get a choice about the number (you are presumed to have picked 1). It is not contradictory to say that the optimal number to pick is 4 but the optimal colour to pick is blue. The two-boxer is saying something pretty similar here.
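For what it’s worth, the colour/number game can be enumerated directly; this is a throwaway sketch using exactly the payoffs as stated.

```python
# Payoffs for the colour/number game: picking number n pays $n;
# red pays $0 with a free number choice; blue pays $5 but forces number 1.

def total(colour: str, number: int) -> int:
    if colour == "blue":
        return 5 + 1      # blue: $5, and the number is forced to 1
    return 0 + number     # red: $0, plus whatever number you picked

best_red = max(total("red", n) for n in range(1, 5))   # $4, by picking 4
best_blue = total("blue", 1)                           # $6

# Given these payoffs, 4 maximizes the number payoff considered on its own,
# yet blue maximizes the total, even though blue forecloses the number choice.
assert best_red == 4
assert best_blue == 6
```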
What ought you to do, according to the two-boxer? Well, that depends on what decision you’re facing. If you’re facing a decision about what algorithm to adopt, then adopt the optimal algorithm (which one-boxes on all future versions of Newcomb’s problem, though not on ones where the prediction has already occurred). If you are not able to choose between algorithms, but are just choosing a decision for this occasion, then choose two-boxing. They do not give contradictory advice.
Taboo “optimal”.
The problem here is that this “optimal” doesn’t cash out to anything in terms of real-world prediction, which means it’s alberzle vs. bargulum all over again: A and B don’t disagree about predictions of what will happen in the world, so they are only disagreeing over which definition of a word to use.
In this context, a two-boxer has to have some definition of “optimal” that doesn’t cash out the same way LWers cash out that word, because our definition is based on what a choice actually gets you, not on what it could have gotten you if the rules were different.
If you’re facing a decision about what algorithm to adopt, then adopt the optimal algorithm (which one-boxes on all future versions of Newcomb’s problem, though not on ones where the prediction has already occurred). If you are not able to choose between algorithms, but are just choosing a decision for this occasion, then choose two-boxing.
And what you just described is a decision algorithm, and it is that algorithm which Omega will use as input to decide what to put in the boxes. “Decide to use algorithm X” is itself an algorithm. This is why it’s incoherent to speak of a decision independently—it’s always being made by an algorithm.
“Just decide” is a decision procedure, so there’s actually no such thing as “just choosing for this occasion”.
And, given that algorithm, you lose on Newcomb’s problem, because what you described is a two-boxing decision algorithm: whenever it is actually in the Newcomb’s-problem situation, an entity using that decision procedure will two-box, because “the prediction has occurred”. It is therefore trivial for me to play the part of Omega here and put nothing in the opaque box when I play against you. I don’t need any superhuman predictive ability; I just need to know that you believe two-boxing is “optimal” once the prediction has already been made. If you think that way, then your two-boxing is predictable ahead of time, and no temporal causation is being violated.
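A rough sketch of that “human Omega” strategy, with made-up names: against an agent whose rule is “two-box once the prediction exists”, no superhuman prediction is needed, because the rule itself is known ahead of time.

```python
def cdt_agent(prediction_already_made: bool) -> str:
    # The decision rule described above: once the prediction exists,
    # this agent reasons that two-boxing dominates.
    return "two-box" if prediction_already_made else "one-box"

def human_omega(agent) -> int:
    # No superhuman powers required: at choosing time the prediction has,
    # by the rules of the game, always already been made, so just ask the
    # rule what it does in that situation.
    predicted = agent(prediction_already_made=True)
    return 1_000_000 if predicted == "one-box" else 0

opaque = human_omega(cdt_agent)                   # $0 goes in the opaque box
choice = cdt_agent(prediction_already_made=True)  # the agent duly two-boxes
winnings = opaque if choice == "one-box" else opaque + 1_000
assert winnings == 1_000   # the two-boxing rule nets only the transparent $1000
```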
Barring some perverse definition of “optimal”, you can’t think two-boxing is coherent unless you think that decisions can be made without using your brain—i.e. that you can screen off the effects of past brain state on present decisions.
Again, though, this is alberzle vs. bargulum. There doesn’t seem to be any dispute about the fact that your decision is the result of prior cause and effect. The two-boxer in this case seems to be saying, “IF we lived in a world where decisions could be made non-deterministically, then the optimal thing to do would be to give every impression of being a one-boxer until the last minute.” A one-boxer agrees that this conditional statement is true… but it is entirely irrelevant to the problem at hand, because the problem offers no such loophole.
So, as to the question of whether two-boxing is optimal, we can say it’s alberzle-optimal but not bargulum-optimal, at which point there is nothing left to discuss.