The setting has a sample space, as in expected utility theory, with situations that take place in some event (let’s call it a situation event) and offer a choice between smaller events resulting from taking alternative actions. The misleading UDT convention is to call the situation event “actual”. It’s misleading because the goal is to optimize expected utility over the whole sample space, not just over the situation event, so the parts of the sample space outside the situation event are effectively still in play: still relevant, not ruled out by the particular situation event being “actual”.
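To put the contrast in symbols (the notation here is purely illustrative, not anything standard): write $\Omega$ for the sample space, $S \subseteq \Omega$ for the situation event, and $U(\omega, \pi)$ for the utility of outcome $\omega$ under policy $\pi$. The goal is then roughly

$$\pi^{*} = \arg\max_{\pi} \mathbb{E}\big[U(\omega, \pi)\big] \quad\text{rather than}\quad \arg\max_{\pi} \mathbb{E}\big[U(\omega, \pi) \mid \omega \in S\big],$$

which is why the part of $\Omega$ outside $S$ stays relevant to the decision.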
Alright. But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot. One event out of the sample space has occurred, and the others have failed to occur. Why would you continue to attempt to achieve that goal, toward which you are no longer capable of taking any action?
“by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot”
That goal may be moot for some ways of making decisions. For UDT it’s not moot; it’s the only thing we care about. And calling some situation or another “actual” has no effect at all on the goal, or on the process of decision making in any situation, actual or otherwise; that’s what makes the goal and the decision process reflectively stable.
“But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot.”
This is what we agree on. If you’re in the situation with a bomb, all that matters is the bomb.
My stance is that Left-boxers virtually never get into the situation to begin with, because of the prediction Omega makes. So with probability close to 1, they never see a bomb.
Your stance (if I understand correctly) is that the problem statement says there is a bomb, so, that’s what’s true with probability 1 (or almost 1).
And so I believe that’s where our disagreement lies. I think Newcomblike problems are often “trick questions” that can be resolved in two ways, one leaning more towards your interpretation.
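To illustrate the two readings numerically, here is a toy calculation. The specific numbers (the disutility of the bomb, the cost of Right, the predictor’s error rate) are assumptions made up for the example, not figures taken from the OP.

```python
# Toy comparison of the two readings of the Bomb problem.
# All numbers below are illustrative assumptions, not taken from the OP.

EPS     = 1e-24       # assumed predictor error rate
U_DEATH = -1_000_000  # assumed utility of taking Left when the bomb is there
U_FEE   = -100        # assumed utility of taking Right (paying the fee)
U_FREE  = 0           # utility of taking Left when there is no bomb

def ex_ante_eu(policy: str) -> float:
    """Expected utility over the whole sample space, evaluated before
    conditioning on actually seeing a bomb (the 'Left-boxers virtually
    never see a bomb' reading)."""
    if policy == "Left":
        # A bomb ends up in Left only when the predictor mispredicts
        # a committed Left-taker, which happens with probability EPS.
        return (1 - EPS) * U_FREE + EPS * U_DEATH
    return U_FEE  # "Right": pay the fee regardless of the prediction

def in_situation_eu(action: str) -> float:
    """Expected utility conditional on the situation as stated:
    the bomb is in Left with probability ~1 (the other reading)."""
    return U_DEATH if action == "Left" else U_FEE

print(ex_ante_eu("Left"), ex_ante_eu("Right"))            # ~-1e-18 vs -100
print(in_situation_eu("Left"), in_situation_eu("Right"))  # -1000000 vs -100
```

Under these assumed numbers, the ex-ante calculation favours committing to Left, while the calculation conditioned on the bomb favours Right; which of the two is the right question to ask is exactly where we seem to differ.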
In the spirit of Vladimir’s points, if I annoyed you, I do apologize. I can get quite intense in such discussions.
“This is what we agree on. If you’re in the situation with a bomb, all that matters is the bomb.”
But that’s false for a UDT agent: it still matters to that agent-instance-in-the-situation what happens in other situations, those without a bomb. It’s not the case that all that matters is the bomb (or even a bomb).
Hmm, interesting. I don’t know much about UDT. From an FDT perspective, I’d say that if you’re in the situation with the bomb, your decision procedure already Right-boxed and therefore you’re Right-boxing again, as logical necessity. (Making the problem very interesting.)
To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?