I’m claiming that this post is conflating an error in constructing an accurate world-map with an error in the decision theory.
The problem is not that CDT has an inaccurate world-map; the problem is that CDT has an accurate world-map and then breaks it. CDT would work much better with an inaccurate world-map, one in which its decision causally affects the prediction.
Having done some research, I’ve found that the thing I was actually pointing to was ratifiability, along with the stance that any reasonable separation of world-modeling and decision-selection should put ratifiability in the former rather than the latter. This specific claim isn’t new. From “Regret and Instability in Causal Decision Theory”:
Second, while I agree that deliberative equilibrium is central to rational decision making, I disagree with Arntzenius that CDT needs to be amended in any way to make it appropriately deliberational. In cases like Murder Lesion a deliberational perspective is forced on us by what CDT says. It says this: A rational agent should base her decisions on her best information about the outcomes her acts are likely to causally promote, and she should ignore information about what her acts merely indicate. In other words, as I have argued, the theory asks agents to conform to Full Information, which requires them to reason themselves into a state of equilibrium before they act. The deliberational perspective is thus already a part of CDT.
However, it’s clear to me now that you were discussing an older, more conventional version of CDT[1], which does not have that property. With respect to that version, the thought experiment goes through; with respect to the version I believe to be sensible, it doesn’t[2].
[1] I’m actually kind of surprised that the conventional version of CDT is that dumb; I had to check a bunch of papers to verify that this was actually happening. Had my memory cooperated at the time, it might have flagged your distinction between CDT and EDT against past LessWrong articles I’ve read, like CDT=EDT. But it didn’t, so I failed to notice you were talking about something different.
[2] I am now confident it does not apply to the thing I’m referring to: the linked paper brings up “Death in Damascus” specifically as a case where ratifiable CDT does not fail.
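To make footnote [2] concrete, here is a minimal sketch of deliberational, ratifiability-seeking CDT on Death in Damascus (my own illustration; the payoffs, step rule, and function name are assumptions, not taken from the linked paper). Conventional CDT flip-flops between the two cities; letting the agent’s credence about Death’s location track its own mixed act, and averaging its best responses, converges on the one ratifiable act:

```python
def deliberate(p=0.9, n_steps=10_000):
    """Deliberational dynamics on p = P(go to Aleppo).

    Death perfectly predicts the agent's mixed act, so the agent's
    credence that Death waits in Aleppo equals its own probability p
    of going there. We repeatedly nudge p toward the causal best
    response, with shrinking steps (a running average of responses).
    """
    for t in range(n_steps):
        q = p                # credence Death is in Aleppo tracks the act
        eu_aleppo = -q       # utility -1 iff Death is where you go
        eu_damascus = -(1 - q)
        if eu_aleppo > eu_damascus:
            target = 1.0     # Aleppo looks strictly better
        elif eu_aleppo < eu_damascus:
            target = 0.0     # Damascus looks strictly better
        else:
            target = p       # indifferent: the current act is ratifiable
        p += (target - p) / (t + 2)
    return p

print(round(deliberate(), 2))  # -> 0.5
```

The 50/50 mixture is ratifiable because, once adopted, neither pure act looks better in light of the information that adopting it provides; any pure act, by contrast, makes the other city look better as soon as it is settled on.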
The problem is not that CDT has an inaccurate world-map; the problem is that CDT has an accurate world-map and then breaks it. CDT would work much better with an inaccurate world-map, one in which its decision causally affects the prediction.
See this post for how you can hack that: https://www.lesswrong.com/posts/9m2fzjNSJmd3yxxKG/acdt-a-hack-y-acausal-decision-theory
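For a concrete sense of the “break the world-map” move that the linked ACDT post formalizes, here is a toy sketch on Newcomb’s problem (the payoffs and function names are my own illustration, not taken from the post). Plain CDT evaluates acts while holding the prediction fixed; the hacked map adds a causal arrow from the act to the prediction:

```python
BOX_A = 1_000_000   # opaque box, filled iff one-boxing was predicted
BOX_B = 1_000       # transparent box, always contains $1,000

def cdt_value(act, p_predicted_one_box):
    # Plain CDT: intervening on the act leaves the prediction untouched,
    # so the opaque box's contents are a fixed background fact.
    filled = p_predicted_one_box
    if act == "one-box":
        return filled * BOX_A
    return filled * BOX_A + BOX_B

def hacked_value(act):
    # Hacked (inaccurate) world-map: the act causally sets the
    # prediction, which in this problem happens to track the truth.
    return BOX_A if act == "one-box" else BOX_B

# Whatever credence plain CDT assigns to the prediction, two-boxing
# dominates; the hacked map instead recommends one-boxing.
assert cdt_value("two-box", 0.99) > cdt_value("one-box", 0.99)
assert hacked_value("one-box") > hacked_value("two-box")
```

The point of the toy model is exactly the claim quoted above: the map with the extra causal arrow is less accurate as a description of the world, yet the agent using it does better on this problem.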
Have they successfully formalised the newer CDT?
Can you clarify what you mean by “successfully formalised”? I’m not sure I can answer that question, but I can say the following:
The Stanford Encyclopedia of Philosophy has a discussion of ratifiability dating back to the 1960s, and by the 1980s it had been applied to both EDT and CDT (which I’d expect, given that constraints on having an accurate world-model should be independent of the decision theory). This gives me confidence that it’s not just a random LessWrong thing.
Abram Demski from MIRI has a whole sequence on when CDT=EDT which leverages ratifiability as a sub-assumption. This gives me confidence that ratifiability is actually onto something (the LessWrong stamp of approval is important!).
Whether any of this means that it’s been “successfully formalised”, I can’t really say. From an outside view: I literally did not know about the conventional version of CDT until yesterday, so I don’t consider myself capable of verifying the extent to which a decision theory has been successfully formalised. Still, this version of CDT is old enough, and well-enough discussed on LessWrong by Known Smart People, that I have high confidence in it.