Your analysis may well be helpful in some contexts, but I think there are at least some contexts where it doesn’t solve things. For instance, in AI.
Suppose we want to create an AI to maximize some utility U. What program should we select?
One possibility would be to select a program which, every time it has to make a decision, estimates the utility that will result from each possible action and takes the action that will yield the highest utility.
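Concretely, I mean something like this minimal sketch (in Python; `estimate_utility` is my placeholder for whatever world model the AI consults, not anything from your post):

```python
# A minimal sketch of the naive proposal: at each decision point, estimate
# the utility of each available action and take the argmax.
# `estimate_utility` is a stand-in for whatever world model the AI uses.

def naive_decision(actions, estimate_utility):
    """Pick the action whose estimated utility is highest."""
    return max(actions, key=estimate_utility)

# Toy usage with a hard-coded estimator; a real agent would query its model.
print(naive_decision(["a", "b"], {"a": 1.0, "b": 2.0}.get))  # -> "b"
```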
This doesn’t work for Newcomblike problems. But what algorithm would work for such problems?
You say:
Let me just restate the thesis to double down: The answer to Newcomb’s problem depends on the point in time from which the question is being asked. There’s no right way to answer the question without specifying this. When the problem is properly specified, there is a time-inconsistency problem: in the moment, you should two-box; but if you’re deciding beforehand and able to commit, you should commit to one-boxing.
And that’s fine, sort of. When creating the AI, we’re in the “deciding beforehand” scenario; we know that we want to commit to one-boxing. But the question is, what decision algorithm should we deploy ahead of time, such that once the algorithm reaches the moment of having to pick an action, it will one-box? In particular, how do we make this general and algorithmically efficient?
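One way to make the “deciding beforehand” move concrete is to score whole policies rather than individual actions, letting the predictor respond to the policy itself. A toy sketch, assuming the standard Newcomb payoffs and a perfect predictor (both assumptions are mine, for illustration only):

```python
# Score whole policies, with the predictor's box-filling depending on the
# policy itself rather than on the action taken at decision time.

def payoff(action, predicted_one_box):
    """Standard Newcomb payoffs: the opaque box holds 1,000,000 iff the
    predictor expected one-boxing; the transparent box always holds 1,000."""
    opaque = 1_000_000 if predicted_one_box else 0
    return opaque if action == "one-box" else opaque + 1_000

def evaluate_policy(policy):
    """Assume a perfect predictor that inspects the policy before filling the box."""
    predicted_one_box = policy() == "one-box"
    return payoff(policy(), predicted_one_box)

policies = {"one-boxer": lambda: "one-box", "two-boxer": lambda: "two-box"}
best = max(policies, key=lambda name: evaluate_policy(policies[name]))
print(best, evaluate_policy(policies[best]))  # -> one-boxer 1000000
```

Enumerating two policies is trivial, of course; the hard part flagged above is doing this kind of policy-level selection generally and efficiently.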
One possibility would be to select a program which, every time it has to make a decision, estimates the utility that will result from each possible action and takes the action that will yield the highest utility.
Why doesn’t this work for Newcomb’s problem? What is the (expected) utility for one-boxing? For two-boxing? Which is higher?
The “problem” part is that the utility from being predicted not to two-box is separate from the utility from two-boxing itself. If the decision cannot influence the already-locked-in prediction (which is the default intuition behind CDT), it’s simple to correctly two-box and take your 1,001,000. If the decision is invisibly constrained to match the prediction, then it’s simple to one-box and take your 1,000,000. Both are the maximum available outcomes, in different decision-causality situations.
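To put numbers on the two readings (these are just the conventional Newcomb payoffs, not anything specific to this thread):

```python
# Payoffs indexed by (action, predicted_one_box).
payoff = {
    ("one-box", True): 1_000_000, ("one-box", False): 0,
    ("two-box", True): 1_001_000, ("two-box", False): 1_000,
}

# Reading 1: the prediction is locked in and causally unaffected by the choice.
# For any fixed p = P(predicted one-box), two-boxing gains exactly 1,000.
p = 0.5  # the ranking is the same for every p in [0, 1]
ev_one = p * payoff[("one-box", True)] + (1 - p) * payoff[("one-box", False)]
ev_two = p * payoff[("two-box", True)] + (1 - p) * payoff[("two-box", False)]
print(ev_two - ev_one)  # 1000.0: two-boxing dominates under this reading

# Reading 2: the choice is constrained to match the prediction, so only the
# "matching" outcomes are reachable, and one-boxing wins.
print(payoff[("one-box", True)], payoff[("two-box", False)])  # 1000000 1000
```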
But conditioning on each decision you might make, you can update your distribution over Omega’s prediction and calculate your EV accordingly. I guess that’s basically EDT, though.
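For what it’s worth, that calculation looks something like this (the 75% predictor accuracy is an illustrative assumption on my part):

```python
# EDT-style evaluation: condition on each candidate action, update the
# distribution over Omega's prediction, then compute expected value.

ACCURACY = 0.75  # assumed P(prediction matches the action actually taken)

def edt_value(action):
    """Expected payoff after conditioning the prediction on the action."""
    p_predicted_one_box = ACCURACY if action == "one-box" else 1 - ACCURACY
    expected_opaque = p_predicted_one_box * 1_000_000
    return expected_opaque + (1_000 if action == "two-box" else 0)

print(edt_value("one-box"), edt_value("two-box"))  # -> 750000.0 251000.0
```

which gives the usual EDT verdict of one-boxing whenever the predictor is accurate enough.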