One-boxing is the rational decision; in LW parlance “rational decision” means “the thing that you do to win.” I don’t think splitting hairs about this is productive or interesting.
I agree. A semantic debate is uninteresting. My original assumption about the differences between two-boxing philosophers and one-boxing LWers was that the two groups used words differently and were engaged in different missions.
If you think the difference is just:
(a) semantic;
(b) a difference of missions;
(c) a different view of which missions are important
then I agree, and I also agree that a long hair-splitting debate is uninteresting.
However, my impression was that some people on LW seemed to think there is more than a semantic debate going on (for example, my impression was that this is what Eliezer thought). This assumption is what motivated the writing of this post. If you think this assumption is wrong, it would be great to know, since if that is the case, I now understand what is going on.
There is more than a semantic debate going on to the extent that two-boxers are of the opinion that if they faced an actual Newcomb’s problem, then what they should actually do is to actually two-box. This isn’t a disagreement about semantics but about what you should actually do in a certain kind of situation.
Okay, but why does the two-boxer care about decisions when agent type appears to be what causes winning (on Newcomblike problems)?
The two-boxer cares about decisions because they use the word “decision” to refer to those things we can control. So they say that we can’t control our past agent type but can control our taking of one or two boxes. Of course, a long argument can be had about what notion of “control” we should appeal to here, but it’s not immediately obvious to me that the two-boxer is wrong to care about decisions in their sense. So they would say that which things we should care about depends not only on which things can cause the best outcome but also on whether we can exert control over those things. The basic claim here seems reasonable enough.
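To make the two calculations concrete, here is a rough sketch (my own illustration, using the standard $1,000,000 / $1,000 payoffs and an assumed 99%-accurate predictor, not numbers from anywhere in this thread) of how the dominance reasoning the two-boxer leans on and the choice-as-evidence reasoning the one-boxer leans on come apart:

```python
# Illustration only: standard Newcomb payoffs and an assumed predictor accuracy.
BIG = 1_000_000    # opaque box contents if Omega predicted one-boxing
SMALL = 1_000      # transparent box contents
ACCURACY = 0.99    # assumed probability that Omega's prediction matches your choice

# Choice-as-evidence calculation: condition the opaque box's contents on your
# own choice, since your choice is strong evidence about Omega's prediction.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

# Dominance calculation: hold the (already fixed) contents constant and note
# that two-boxing adds SMALL in either state of the world.
for opaque in (BIG, 0):
    assert opaque + SMALL > opaque

print(f"one-box: ${ev_one_box:,.0f}  two-box: ${ev_two_box:,.0f}")
# one-box: $990,000  two-box: $11,000
```

Both calculations come out as stated; the dispute here is over which of them tracks the thing we can actually control.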
Yes, and then their daughters die. Again, if a long argument outputs a conclusion you know is wrong, you know there’s something wrong with the argument even if you don’t know what it is.
It’s not clear to me that the argument outputs the wrong conclusion. Their daughters die because of their agent type at the time of prediction, not because of their decision; they can’t control their agent type at this past time so they don’t try to. It’s unclear that someone is irrational for exerting the best influence they can. Of course, this is all old debate, so I don’t think we’re really progressing things here.
they can’t control their agent type at this past time so they don’t try to.
But if they didn’t think this, then their daughters could live. You don’t think, in this situation, you would even try to stop thinking this way? I’m trying to trigger a “shut up and do the impossible” intuition here, but if you insist on splitting hairs, then I agree that this conversation won’t go anywhere.
Yes, if the two-boxer had a different agent type in the past, then their daughters would live. No disagreement there. But I don’t think I’m splitting hairs by thinking this doesn’t immediately imply that one-boxing is the rational decision (rather, I think you’re failing to acknowledge the possibility of potentially relevant distinctions).
I’m not actually convinced by the two-boxing arguments, but I don’t think they’re as obviously flawed as you seem to think. And yes, I think we now agree on one thing at least (further conversation will probably not go anywhere), so I’m going to leave things at that.
As the argument goes, you can’t control your past selves, but that isn’t the form of the experiment. The only self that you’re controlling is the one deciding whether to one-box (equivalently, whether to be a one-boxer).
See, that is the self that past Omega is paying attention to in order to figure out how much money to put in the box. That’s right, past Omega is watching current you to figure out whether or not to kill your daughter / put money in the box. It doesn’t matter how he does it; all that matters is whether or not your current self decides to one-box.
To follow a thought experiment I found enlightening here: how is it that past Omega knows whether or not you’re a one-boxer? In any simulation he could run of your brain, the simulated you could just know it’s a simulation, and then Omega wouldn’t get the correct result, right? But, as we know, he does get the result right almost all of the time. Ergo, when the simulated you looks outside, it sees a bird on a tree. If it uses the bathroom, the toilet might clog. Any giveaway would let the selfish you one-box in the simulation while still two-boxing in real life.
The point? How do you know that current you isn’t the simulation past Omega is using to figure out whether to kill your daughter? Are philosophical claims about the irreducibility of intentionality enough to take the risk?
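For what it’s worth, the structure of that argument can be put into a toy model (my own sketch, using the monetary payoffs and assuming Omega’s simulation is simply your decision procedure run faithfully): Omega “predicts” by running the same procedure you run, so whichever procedure you actually are is what fills, or empties, the box.

```python
# Toy model of the "you might be the simulation" argument. Assumes the
# simulation runs the very same (deterministic) decision procedure as you.
BIG, SMALL = 1_000_000, 1_000

def omega(decide):
    """Fill the opaque box according to what your decision procedure outputs."""
    predicted = decide()          # the 'simulated' run can't tell it is simulated
    return BIG if predicted == "one-box" else 0

def play(decide):
    opaque = omega(decide)        # boxes are filled before 'you' choose
    choice = decide()             # the real run of the same procedure
    return opaque if choice == "one-box" else opaque + SMALL

print(play(lambda: "one-box"))    # 1000000
print(play(lambda: "two-box"))    # 1000
```

Under that assumption there is no run of the procedure that gets to two-box “for free,” which is the force of the worry above.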