You infer the existence of me burning to death from what’s stated in the problem as well. There’s no difference.
I do have the assumption of subjunctive dependence. But without that one—if, say, the predictor predicts by looking at the color of my shoes—then I don’t Left-box anyway.
Of course there’s a difference: inferring burning to death just depends on the perfectly ordinary assumption of cause and effect, plus what is very explicitly stated in the problem. Inferring the existence of other worlds depends on much more esoteric assumptions than that. There’s really no comparison at all.
Not only is that not the only assumption required, it’s not even clear what it means to “assume” subjunctive dependence. Sure, it’s stipulated that the predictor is usually (but not quite always!) right about what you’ll do. What else is there to this “assumption” than that?
But how that leads to “other worlds exist” and “it’s meaningful to aggregate utility across them” and so on… I have no idea.
Inferring that I don’t burn to death depends on:
1. Omega modelling my decision procedure
2. Cause and effect from there.
That’s it. No esoteric assumptions. I’m not talking about a multiverse with worlds existing next to each other or whatever, just possible worlds.
If they’re just possible worlds, then why do they matter? They’re not actual worlds, after all (by the time the described scenario is happening, it’s too late for any of them to be actual!). So… what’s the relevance?
The world you’re describing is just as much a possible world as the ones I describe. That’s my point.
Huh? It’s the world that’s stipulated to be the actual world, in the scenario.
No, it isn’t. In the world that’s stipulated, you still have to make your decision.
That decision is made in my head and in the predictor’s head. That’s the key.
But if you choose Left, you will burn to death. I’ve already quoted that. Says so right in the OP.
That’s one possible world. There are many more where I don’t burn to death.
But… there aren’t, though. They’ve already failed to be possible, at that point.
The UDT convention is that other possible worlds remain relevant, even when you find yourself in a possible world that isn’t compatible with their actuality. It’s confusing to discuss this general point as if it’s specific to this contentious thought experiment.
Well, we’re discussing it in the context of this thought experiment. If the point applies more generally, then so be it.
Can you explain (or link to an explanation of) what is meant by “convention” and “remain relevant” here?
The setting has a sample space, as in expected utility theory, with situations that take place in some event (let’s call it a situation event) and offer a choice between smaller events resulting from taking alternative actions. The misleading UDT convention is to call the situation event “actual”. It’s misleading because the goal is to optimize expected utility over the whole sample space, not just over the situation event, so the parts of the sample space outside the situation event are effectively still in play, still relevant, not ruled out by the particular situation event being “actual”.
Alright. But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot. One event out of the sample space has occurred, and the others have failed to occur. Why would you continue to attempt to achieve that goal, toward which you are no longer capable of taking any action?
That goal may be moot for some ways of making decisions. For UDT it’s not moot; it’s the only thing we care about. And calling some situation or another “actual” has no effect at all on the goal, or on the process of decision making in any situation, actual or otherwise; that is what makes the goal and the decision process reflectively stable.
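To make the sample-space framing concrete, here is a minimal sketch, under assumptions of my own: the utilities, the −100 cost for taking Right, and the function names are illustrative rather than taken from the problem statement, and the error rate is the problem’s 1-in-a-trillion-trillion figure. It scores each fixed policy two ways: unconditionally over the whole sample space, and conditionally on the “situation event” in which a bomb is already in Left.

```python
# Minimal sketch of the sample-space framing (illustrative numbers only).
# A bomb goes in Left exactly when the predictor predicts Right.

ERROR_RATE = 1e-24   # "1 in a trillion trillion"
U_DEATH = -1e6       # assumed disutility of burning to death
U_RIGHT = -100       # assumed cost of taking Right
U_LEFT_FREE = 0      # Left is free when there is no bomb

def outcomes(policy):
    """Yield (probability, utility, bomb_in_left) for a fixed policy."""
    other = "Left" if policy == "Right" else "Right"
    for predicted, prob in ((policy, 1 - ERROR_RATE), (other, ERROR_RATE)):
        bomb_in_left = (predicted == "Right")
        if policy == "Left":
            utility = U_DEATH if bomb_in_left else U_LEFT_FREE
        else:
            utility = U_RIGHT
        yield prob, utility, bomb_in_left

def eu_whole_sample_space(policy):
    """Expected utility over the whole sample space (the UDT goal)."""
    return sum(p * u for p, u, _ in outcomes(policy))

def eu_given_bomb_in_left(policy):
    """Expected utility conditional on the situation event (bomb in Left)."""
    events = [(p, u) for p, u, bomb in outcomes(policy) if bomb]
    total = sum(p for p, _ in events)
    return sum(p * u for p, u in events) / total

for policy in ("Left", "Right"):
    print(policy, eu_whole_sample_space(policy), eu_given_bomb_in_left(policy))

# Whole sample space:        Left ≈ -1e-18,  Right = -100   (Left-boxing wins)
# Conditional on the bomb:   Left = -1e6,    Right = -100   (Right-boxing wins)
```

The two scores rank the policies oppositely, which is the crux of the exchange above: whether the agent standing in front of the boxes should be optimizing the unconditional quantity (the UDT goal) or the conditional one.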
“But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot.”
This is what we agree on. If you’re in the situation with a bomb, all that matters is the bomb.
My stance is that Left-boxers virtually never get into the situation to begin with, because of the prediction Omega makes. So with probability close to 1, they never see a bomb.
Your stance (if I understand correctly) is that the problem statement says there is a bomb, so, that’s what’s true with probability 1 (or almost 1).
And so I believe that’s where our disagreement lies. I think Newcomblike problems are often “trick questions” that can be resolved in two ways, one leaning more towards your interpretation.
In the spirit of Vladimir’s points, if I annoyed you, I do apologize. I can get quite intense in such discussions.
But that’s false for a UDT agent: it still matters to that agent-instance-in-the-situation what happens in other situations, those without a bomb. It’s not the case that all that matters is the bomb (or even a bomb).
Hmm, interesting. I don’t know much about UDT. From an FDT perspective, I’d say that if you’re in the situation with the bomb, your decision procedure already Right-boxed and therefore you’re Right-boxing again, as a logical necessity. (Making the problem very interesting.)
To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?
Not at the point in time where Omega models my decision procedure.
One thing we do agree on:
If I ever find myself in the Bomb scenario, I Right-box. Because in that scenario, the predictor’s model of me already Right-boxed, and therefore I do, too—not as a decision, per se, but as a logical consequence.
The correct decision is another question—that’s Left-boxing, because the decision is being made in two places. If I find myself in the Bomb scenario, that just means the decision to Right-box was already made.
The Bomb problem asks what the correct decision is, and makes clear (at least under my assumption) that the decision is made at 2 points in time. At that first point (in the predictor’s head), Left-boxing leads to the most utility: it avoids burning to death for free. Note that at that point, there is not yet a bomb in Left!
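The same unconditional comparison, written symbolically (again illustrative: the $100 cost for taking Right is the usual figure in statements of Bomb and isn’t restated in this thread, $u_{\mathrm{death}}$ stands for the large negative utility of burning, and $\epsilon = 10^{-24}$ is the problem’s error rate):

$$\mathbb{E}[u \mid \text{commit to Left}] = (1-\epsilon)\cdot 0 + \epsilon\, u_{\mathrm{death}} \approx \epsilon\, u_{\mathrm{death}}, \qquad \mathbb{E}[u \mid \text{commit to Right}] = -100.$$

So unless $\lvert u_{\mathrm{death}}\rvert$ is on the order of $100/\epsilon = 10^{26}$, the Left-boxing commitment wins at the point of prediction, and with probability $1-\epsilon$ no bomb is ever placed in Left.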
If we agree on that, then I don’t understand what it is that you think we disagree on! (Although the “not as a decision, per se” bit seems… contentless.)
No, it asks what decision you should make. And we apparently agree that the answer is “Right”.
Hmmm, I thought that comment might clear things up, but apparently it doesn’t. And I’m left wondering if you even read it.
Anyway, Left-boxing is the correct decision. But since you didn’t really engage with my points, I’ll be leaving now.
What does it mean to say that Left-boxing is “the correct decision” if you then say that the decision you’d actually make would be to Right-box? This seems to be straightforwardly contradictory, in a way that renders the claim nonsensical.
I read all your comments in this thread. But you seem to be saying things that, in a very straightforward way, simply don’t make any sense…
Alright. The correct decision is Left-boxing, because that means the predictor’s model Left-boxed (and so do I), letting me live for free: at the point where the predictor models me, the Bomb isn’t placed yet (and never will be).
However, IF I’m in the Bomb scenario, then the predictor’s model already Right-boxed. Then, because of subjunctive dependence, it’s apparently not possible for me to Left-box, just as it is impossible for two calculators to give different answers to 2 + 2.
Well, the Bomb scenario is what we’re given. So the first paragraph you just wrote there is… irrelevant? Inapplicable? What’s the point of it? It’s answering a question that’s not being asked.
As for the last sentence of your comment, I don’t understand what you mean by it. Certainly it’s possible for you to Left-box; you just go ahead and Left-box. This would be a bad idea, of course! Because you’d burn to death. But you could do it! You just shouldn’t—a point on which we, apparently, agree.
The bottom line is: to the actual single question the scenario asks—which box do you choose, finding yourself in the given situation?—we give the same answer. Yes?
The bottom line is that Bomb is a decision problem. If I am still free to make a decision (which I suppose I am, otherwise it isn’t much of a problem), then the decision I make is made at two points in time. And then Left-boxing is the better decision.
Yes, the Bomb is what we’re given. But with the very reasonable assumption of subjunctive dependence, it implies exactly what I am saying...
We agree that if I were there, I would Right-box, but also that everybody would then Right-box, as a logical necessity (well, modulo the 1 in a trillion trillion error rate, sure). It has nothing to do with correct or incorrect decisions, viewed like that: the decision is already hard-coded into the problem statement, because of the subjunctive dependence.
“But you can just Left-box” doesn’t work: that’s like expecting one calculator to give a different answer to 2 + 2 than another calculator.
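The two-calculators point can be made literal with a toy sketch (entirely illustrative: decision_procedure and its particular output are stand-ins, not anything taken from the problem statement). The predictor’s model of you and the you standing in front of the boxes are two runs of the same deterministic procedure, so they cannot return different answers, up to the problem’s tiny error rate.

```python
# Toy illustration of subjunctive dependence (not the actual problem setup):
# the predictor's model and the agent are two runs of the same deterministic
# procedure, so expecting them to diverge is like expecting one calculator
# to get a different answer to 2 + 2 than another.

def decision_procedure(observation: str) -> str:
    """Stand-in for the agent's deterministic decision procedure.
    The body is arbitrary; what matters is that the same computation runs
    wherever the procedure is instantiated."""
    return "Right" if observation == "bomb visible in Left" else "Left"

# One run happens inside the predictor's model of you facing the boxes...
prediction = decision_procedure("bomb visible in Left")
# ...and one run happens when you actually stand in front of them.
choice = decision_procedure("bomb visible in Left")

# Barring the 1-in-a-trillion-trillion modelling error, the two runs
# cannot come apart.
assert prediction == choice
```

On this framing, the substantive question is which function you want decision_procedure to be, not what either run “does” once the function is fixed.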
Unless I’m missing something, it’s possible you’re in the predictor’s simulation, in which case it’s possible you will Left-box.
Excellent point!
I think it’s better to explain to such people the problem where the predictor is perfect, and then generalize to an imperfect predictor. They don’t understand the general principle of your present choices pseudo-overwriting the entire timeline and can’t think in the seemingly noncausal way that optimal decision-making requires. If you jump right to an imperfect predictor, the principle becomes, I think, too complicated to explain.