Updateless (not Universal) Decision Theory is almost the same thing as TDT. Somewhat simplified: In TDT you act as though you were controlling the abstract computation that governs your action and take all other instances of that computation into account. In UDT you pretend to be the abstract computation.
I think you are mischaracterizing EDT; you should look at the Smoking Lesion problem to see how.
Your take on Causal Decision Theory (calling it “Classical” is perhaps not completely wrong, but better to stick with “Causal”) is a bit better, but holding the past constant doesn’t eliminate all instances of CDT underperforming (from a TDT/UDT perspective).
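Since the Smoking Lesion does a lot of work in the rest of the thread, here is a minimal numerical sketch of it; the probabilities and utilities are made up purely for illustration. The lesion raises the probability of both smoking and cancer, while smoking itself has no causal effect on cancer. Naive EDT conditions on the act as evidence about the lesion and prefers not smoking, whereas CDT treats the act as an intervention that leaves the lesion probability at its prior and prefers smoking.

```python
# Toy Smoking Lesion model. All numbers are made up for illustration.
# The lesion raises the probability of both smoking and cancer;
# smoking itself has no causal effect on cancer.

P_LESION = 0.2
P_SMOKE = {True: 0.9, False: 0.1}      # P(smoke | lesion), P(smoke | no lesion)
P_CANCER = {True: 0.8, False: 0.01}    # P(cancer | lesion), P(cancer | no lesion)
U_SMOKE, U_CANCER = 10, -1000          # enjoy smoking; really dislike cancer

def p_lesion_given_smoking(smoke: bool) -> float:
    """Bayes: how strongly the act of smoking indicates the lesion."""
    p_if_lesion = P_SMOKE[True] if smoke else 1 - P_SMOKE[True]
    p_if_clear = P_SMOKE[False] if smoke else 1 - P_SMOKE[False]
    return p_if_lesion * P_LESION / (p_if_lesion * P_LESION + p_if_clear * (1 - P_LESION))

def expected_utility(smoke: bool, p_lesion: float) -> float:
    p_cancer = p_lesion * P_CANCER[True] + (1 - p_lesion) * P_CANCER[False]
    return (U_SMOKE if smoke else 0) + p_cancer * U_CANCER

for smoke in (True, False):
    edt = expected_utility(smoke, p_lesion_given_smoking(smoke))  # condition on the act
    cdt = expected_utility(smoke, P_LESION)                       # intervene; prior stays
    print(f"smoke={smoke!s:5}  EDT={edt:8.1f}  CDT={cdt:8.1f}")

# EDT ranks not smoking higher (smoking is bad news about the lesion);
# CDT ranks smoking higher (the act changes nothing about the lesion).
```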
I was going to do the Smoking Lesion problem, but EDT doesn’t seem to be well-defined under it. You know that you’re using EDT, which affects things in a weird way. If this means that you’ll definitely not smoke, then definitely not smoking would be optimal, since you’d be tied with every other strategy known to result in never smoking; but the same goes for definitely smoking.
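One way to make that ill-definedness concrete (this is my sketch, not something from the thread): EDT compares conditional expected utilities E[U | action], and an agent that is already certain of its own action assigns probability zero to the alternative, so that branch of the comparison is a 0/0 expression.

```python
# My sketch, not from the thread: EDT compares E[U | a] over actions a, where
# E[U | a] = E[U * 1{A=a}] / P(A=a).  An agent that is already certain of its
# own action puts probability 0 on the other action, so that branch of the
# comparison is a 0/0 expression and EDT has nothing to say about it.

def conditional_eu(p_action: float, utility_mass: float) -> float | None:
    """E[U | action]; None when P(action) = 0 and the conditional is undefined."""
    if p_action == 0.0:
        return None
    return utility_mass / p_action

# "I run EDT, and I know that means I will definitely not smoke."
p_smoke = 0.0
not_smoke_mass = -31.4   # E[U * 1{not smoke}]; the exact number doesn't matter here
smoke_mass = 0.0         # no probability mass on smoking at all

print(conditional_eu(1.0 - p_smoke, not_smoke_mass))  # -31.4: the only branch that exists
print(conditional_eu(p_smoke, smoke_mass))            # None: nothing to compare it against

# A known "definitely smoke" policy is symmetric: it can only evaluate its own
# branch. Each deterministic policy therefore looks fine by its own lights,
# which is the tie the comment above points at.
```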
In TDT you act as though you were controlling the abstract computation that governs your action and take all other instances of that computation into account. In UDT you pretend to be the abstract computation.

So, what do they do differently?
I was going to do the Smoking Lesion problem, but EDT doesn’t seem to be well-defined under it. You know that you’re using EDT, which affects things in a weird way. If this means that you’ll definitely not smoke, then definitely not smoking would be optimal, since you’d be tied with every other strategy known to result in never smoking; but the same goes for definitely smoking.
Huh? If you mean that knowledge of yourself being an EDT agent screens off your decision from being evidence for subsequent cancer, what distinguishes this case from cases where that knowledge doesn’t screen off? Remember, you are not allowed to look at causal arrows, because that would make you a CDT actor.
Your decision is evidence. It’s just that if you knew beforehand that you were going to smoke, there’s nobody else who knew they’d smoke who does any better.
Thinking about it more, it’s just the idea of being certain about your future decisions that breaks it. I guess that means an ideal EDTist is a contradiction. If you assume that there’s some chance of doing something besides what EDT suggests, then it only gives the “never smoke” answer. If it gave “always smoke”, then you’d be, say, 99% sure to smoke, in which case the 1% who ended up not smoking would be better off.
Edit: No, that still doesn’t work. I don’t think you can count the 1% as using a different strategy. I’m going to have to think about this more.
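For what it’s worth, here is the 1% arithmetic worked out under one specific model that is my assumption, not something stated in the thread: the deviation is pure noise, uncorrelated with the lesion. On that reading, ending up in the 1% is no evidence about the lesion, so the deviators simply forgo the smoking utility, which is one way of cashing out the edit’s worry that the 1% can’t be counted as a different strategy.

```python
# Working out the 1% arithmetic from the comment above under one possible
# model (my assumption, not from the thread): the agent intends to smoke but
# deviates with probability EPS, and the deviation is pure noise, independent
# of the lesion.

EPS = 0.01
P_LESION = 0.2
P_CANCER = {True: 0.8, False: 0.01}   # P(cancer | lesion), P(cancer | no lesion)
U_SMOKE, U_CANCER = 10, -1000

prior_cancer = P_LESION * P_CANCER[True] + (1 - P_LESION) * P_CANCER[False]

# Because the deviation is independent of the lesion, ending up in the 1% is
# no evidence about the lesion: both groups face the same cancer risk.
eu_smokers = U_SMOKE + prior_cancer * U_CANCER     # the 99% who smoked
eu_deviators = 0 + prior_cancer * U_CANCER         # the 1% who didn't

print(f"99% who smoked:   {eu_smokers:7.1f}")      # -158.0
print(f"1% who deviated:  {eu_deviators:7.1f}")    # -168.0

# Under this model the deviators are just out the smoking utility, which is
# one way of reading the edit's point that the 1% can't be counted as having
# used a different (and better) strategy.
```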
UDT is conceptually a lot simpler, but probably computationally more expensive.
As for differences in outcome, UDT implies perfect altruism between different instances of yourself (or at least every instance valuing each particular instance the same way all the others do), while TDT doesn’t necessarily (though it’s suggestive). There may be other differences, but that was the first one I could think of.