In TDT we don’t do any severance! Nothing is uncaused, not our decision, nor our decision algorithm either. Trying to do causal severance is a basic root of paradoxes, because things are not uncaused in real life. What we do, rather, is condition on the start state of our program, thereby screening off the universe (not unlawfully severing it), and factor out our uncertainty about the logical output of the program given its input. Since in real life most things we do to the universe should not change this logical fact, nor will observing this logical fact tell us which non-impossible possible world we are living in, it shouldn’t give us any news about the nodes above, once we’ve screened off the algorithm. It does, however, give us logical news about Omega’s output, and of course about which boxes we’ll end up with.
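For concreteness, here is a toy sketch (not from the original comment; the function name and payoff numbers are illustrative assumptions) of what factoring out uncertainty about the algorithm's logical output can look like in Newcomb's problem, assuming Omega's prediction simply equals that output:

```python
# Toy sketch: treat Omega's prediction as logically tied to the agent's
# output rather than causally severed from it. All names and payoffs here
# are illustrative assumptions, not part of the original comment.

def newcomb_payoff(action: str) -> int:
    """Payoff when Omega's prediction logically matches the agent's action."""
    # Learning our own output is "logical news" about Omega's prediction,
    # not a causal influence on it.
    omega_predicts_one_box = (action == "one-box")
    opaque_box = 1_000_000 if omega_predicts_one_box else 0
    transparent_box = 1_000
    return opaque_box + (transparent_box if action == "two-box" else 0)

# Factor out the uncertainty over the logical output: evaluate each possible
# output of the algorithm and choose the one with the higher payoff.
best = max(["one-box", "two-box"], key=newcomb_payoff)
print(best, newcomb_payoff(best))  # one-box 1000000
```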
My reading of this is that you use influence diagrams, not Bayes nets; you think of your decision as influenced by things preceding it, but not as an uncertainty node. Is that a fair reading, or am I missing something?
My instant reaction upon hearing this is to try to come up with cases where they DO change that logical fact. Holding off on proposing solutions for now.
So, for this issue I would note that for the coinflip to influence the decision algorithm, there needs to be an arrow from the coinflip to the decision algorithm. Consider two situations:
Omega explains the counterfactual mugging deal, learns whether you would pay if the coin comes up tails, and then tells you how the coin came up.
Omega tells you how the coin came up, explains the counterfactual mugging deal, and then learns whether you would pay if the coin comes up tails.
Those have different Bayes nets and so it can be entirely consistent for TDT to output different strategies in each.
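To make the difference concrete, here is a minimal sketch (the node names and adjacency-dict encoding are my own, not from the comment) of the two graphs, which differ only in whether there is an arrow from the coinflip into the decision algorithm:

```python
# Illustrative sketch: the node names and adjacency-dict encoding below are
# my own assumptions, not taken from the comment.

# Situation 1: Omega elicits your policy before revealing the coin, so the
# coinflip is not an input to the decision algorithm.
situation_1 = {
    "coinflip": ["payout"],
    "decision_algorithm": ["payout"],
    "payout": [],
}

# Situation 2: you learn the coin result before deciding, which adds an
# arrow from the coinflip into the decision algorithm.
situation_2 = {
    "coinflip": ["decision_algorithm", "payout"],
    "decision_algorithm": ["payout"],
    "payout": [],
}

def coin_can_influence_decision(graph: dict) -> bool:
    """True iff the graph has a direct arrow coinflip -> decision_algorithm."""
    return "decision_algorithm" in graph.get("coinflip", [])

print(coin_can_influence_decision(situation_1))  # False
print(coin_can_influence_decision(situation_2))  # True
```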
Typo corrected! And yes, that is the CDT-TDT debate—but not really relevant here.
I stand corrected, and have corrected it.