It’s not clear to me that the argument outputs the wrong conclusion. Their daughters die because of their agent type at the time of prediction, not because of their decision, and they can’t control their agent type at that past time, so they don’t try to. It’s unclear that someone is irrational for exerting the best influence they can. Of course, this is all old debate, so I don’t think we’re really progressing things here.
“they can’t control their agent type at that past time, so they don’t try to”
But if they didn’t think this, then their daughters could live. You don’t think that, in this situation, you would at least try to stop thinking this way? I’m trying to trigger a “shut up and do the impossible” intuition here, but if you insist on splitting hairs, then I agree that this conversation won’t go anywhere.
Yes, if the two-boxer had had a different agent type in the past, then their daughters would live. No disagreement there. But I don’t think I’m splitting hairs by holding that this doesn’t immediately imply that one-boxing is the rational decision (rather, I think you’re failing to acknowledge the possibility of relevant distinctions).
I’m not actually convinced by the two-boxing arguments, but I don’t think they’re as obviously flawed as you seem to think. And yes, we now agree on one thing at least (further conversation will probably not go anywhere), so I’m going to leave things at that.
As the argument goes, you can’t control your past selves, but that isn’t the form of the experiment. The only self that you’re controlling is the one deciding whether to one-box (equivalently, whether to be a one-boxer).
See, that is the self that past Omega is paying attention to in order to figure out how much money to put in the box. That’s right: past Omega is watching current you to figure out whether or not to kill your daughter / put money in the box. It doesn’t matter how he does it; all that matters is whether or not your current self decides to one-box.
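To put rough numbers on that, here is a quick sketch (my own, with the standard Newcomb payoffs of $1,000,000 and $1,000 and a predictor accuracy p assumed purely for illustration) of why, once the prediction tracks your actual decision, one-boxing wins in expectation:

```python
# A rough sketch (assumed numbers, not from the thread): expected value of
# each policy against a predictor with accuracy p, using the standard
# Newcomb payoffs of $1,000,000 (opaque box) and $1,000 (transparent box).
BIG, SMALL = 1_000_000, 1_000

def expected_value(one_box: bool, p: float) -> float:
    if one_box:
        # The opaque box is full iff Omega predicted one-boxing, which
        # happens with probability p when you actually one-box.
        return p * BIG
    # Two-boxing: you always get SMALL, and BIG only on a misprediction.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.99, 0.9, 0.55):
    print(f"p={p}: one-box {expected_value(True, p):>12,.0f}  "
          f"two-box {expected_value(False, p):>12,.0f}")
```

One-boxing comes out ahead for any accuracy above roughly 0.5, so the conclusion doesn’t hinge on Omega being nearly infallible.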
To follow a thought experiment I found enlightening here: how is it that past Omega knows whether or not you’re a one-boxer? In any simulation he could run of your brain, the simulated you could just notice it’s a simulation, and then Omega wouldn’t get the correct result, right? But, as we know, he does get the result right almost all of the time. Ergo, the simulation must be indistinguishable from reality: the simulated you looks outside and sees a bird in a tree; if it uses the bathroom, the toilet might clog. Any giveaway would let the selfish you one-box in the simulation while still two-boxing in real life.
The point? How do you know that current you isn’t the simulation past Omega is using to figure out whether to kill your daughter? Are philosophical claims about the irreducibility of intentionality really enough to justify taking that risk?
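A toy sketch of that point (my own construction, with assumed monetary payoffs standing in for the stakes): if Omega’s prediction is literally a second run of your decision procedure, and the procedure can’t tell which run it’s in, then whatever policy you implement gets applied on both sides:

```python
# Toy model (assumed construction): Omega's "prediction" is just another
# call to your decision procedure. Neither call can tell whether it is
# the simulation or the real thing, so the same answer comes out twice.
BIG, SMALL = 1_000_000, 1_000

def play(decide) -> int:
    prediction = decide()   # past Omega simulating you
    opaque = BIG if prediction == "one-box" else 0
    choice = decide()       # "real" you, indistinguishable from the call above
    return opaque if choice == "one-box" else opaque + SMALL

print(play(lambda: "one-box"))   # 1,000,000
print(play(lambda: "two-box"))   # 1,000
# To beat this, an agent would need decide() to return different answers
# on the two calls -- which requires exactly the giveaway the setup denies.
```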