(Mostly pasted from a conversation I had with esrogs)
While there’s some sense in which we’re eventually going to need to use decision making heuristics, and in which using CDT on a graphical model of a world is just a heuristic, there’s also a sense in which we don’t know what we’re approximating yet or how well our existing DTs approximate it.
My interest is in figuring out what the idealized process we want to approximate is first, and then figuring out the heuristics.
The whole “Newcomblike problems are the norm” thing is building towards the motivation of “this is why we need to better understand what we’re approximating” (although it could also be used to motivate “this is why we need better heuristics”, but that was not my point).
Your objection seems similar to Vaniver’s, in the main thread, that CDT could find a causal connection between its choice and the contents of the boxes in the Newcomb problem. This appeals to the intuition that there is some connection between the choice and the boxes (which there is), but fails to notice that the connection is acausal.
Or, in other words, it’s a good intuition that “something like the CDT algorithm” can solve Newcomb’s problem if you just give it a “good enough” world-model that allows it to identify these connections. But this involves using a non-causal world model. And, indeed, it is these non-causal world models that we must use to capture the intuition that you can win at Newcomb’s problem using a non-causal decision theory.
Whenever there are non-causal connections (as in Newcomb problems) you need to have a world model containing non-causal connections between nodes.
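To make this concrete, here is a toy sketch (the numbers and names are my own invention, not from the discussion): the same “evaluate expected payoffs and pick the best option” algorithm recommends different actions in Newcomb’s problem depending on whether its world model contains the acausal predictor link.

```python
# Toy Newcomb's problem: box A always holds $1,000; box B holds $1,000,000
# iff the predictor foresaw one-boxing. All parameters here are illustrative.

ACCURACY = 0.99  # assumed predictor accuracy

def expected_payoff(action, box_b_full_prob):
    """Expected dollars given P(box B contains the $1,000,000)."""
    big = 1_000_000 * box_b_full_prob
    small = 1_000 if action == "two-box" else 0
    return big + small

# Causal world model: at decision time the boxes are already fixed, so
# P(box B is full) is the same whichever action we evaluate.
p_full_causal = 0.5  # some fixed prior, identical across actions
cdt = {a: expected_payoff(a, p_full_causal) for a in ("one-box", "two-box")}

# World model with the acausal link: the predictor's fill correlates with
# the agent's actual choice, so conditioning on the action shifts P(full).
acausal = {
    "one-box": expected_payoff("one-box", ACCURACY),
    "two-box": expected_payoff("two-box", 1 - ACCURACY),
}

print(max(cdt, key=cdt.get))        # → two-box (causal model)
print(max(acausal, key=acausal.get))  # → one-box (model with acausal link)
```

The “pick the best option” step is identical in both cases; only the world model that generates the counterfactual probabilities differs.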
(Side note: EDT is underspecified, and various attempts to fully specify it can make it equivalent to CDT or TDT or UDT, but we only found the latter two specifications after discovering TDT/UDT. It doesn’t seem very useful to me to say that EDT works well/poorly unless you better specify EDT.)
I feel like there’s a recurring problem here: when I say “look at this clear-cut case of non-causal connections”, some people respond “but Newcomb problems are unrealistic”, and when I instead say “look at these realistic cases with acausal connections”, others say “ah, but this is not clear cut”. That’s what you’re doing when you say
If you allow the decision-maker to think carefully through all the unconscious signals sent by her decisions, it’s less clear that there’s anything Newcomblike
I’m sympathetic to this claim, but hopefully you can see the thing that I’m trying to point to here, which is this: there really are scenarios where there are acausal logical connections (that we care about) in the world.
Surely you agree that information can propagate acausally. For example, if I roll a die, write the result in two envelopes, send one to Alpha Centauri, and open the other once the first has arrived, I learn what is in the envelope at Alpha Centauri “faster than light”; the physical causal separation does not prevent the information from propagating. Causal connections and information flow are often, but not always, aligned.
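The envelope example can be sketched as a toy simulation (everything here is illustrative): observing the local copy is fully informative about the distant one, even though no signal passes between the two locations, while intervening on the local copy changes nothing far away.

```python
import random

# One die roll is copied into two envelopes; one copy is shipped far away.
random.seed(0)

def world():
    roll = random.randint(1, 6)
    return {"local": roll, "distant": roll}

# Observation: the local envelope pins down the distant one exactly,
# because both copies descend from the same roll.
samples = [world() for _ in range(1000)]
assert all(w["local"] == w["distant"] for w in samples)

# Intervention: rewriting the local slip after the fact does nothing to
# the distant envelope. The connection is informational, not causal.
w = world()
original_distant = w["distant"]
w["local"] = 1  # forcibly overwrite the local copy
print(w["distant"] == original_distant)  # → True
```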
Similarly, the connections in the world that I care about are related to the information that I have, not to the causal connections between them. These things often correspond, but not always.
It is in this sense that CDT is doing the wrong thing: it’s not the “evaluate counterfactuals and pick the best option” part that’s the problem, it’s the “how do you construct the counterfactuals (and on what world-model)” that is the problem.
We will eventually need to use decision-making heuristics, but at this point we don’t even know what we’re approximating, and we’re decidedly not looking specifically for “good decision-making heuristics” right now. We’re trying to figure out decision theory in an idealized/deterministic setting first, so that by the time we do resort to heuristics we’ll have some idea of what it is we’re trying to approximate.
I’m sympathetic to this claim, but hopefully you can see the thing that I’m trying to point to here, which is this: there really are scenarios where there are acausal logical connections (that we care about) in the world.
I agree with this—I think the absentminded driver is a particularly clean-cut case.
I was partly trying to offer an explanation of what was going on in e.g. discussions of Newcomb’s problem where people contrast CDT with EDT. Given that you say EDT isn’t even fully specified, it seems pretty clear that they’re interpreting it as a heuristic, but I’m not sure they’re always aware of that.
Surely you agree that information can propagate acausally
Yes—nice example.
We will eventually need to use decision-making heuristics, but at this point we don’t even know what we’re approximating, and we’re decidedly not looking specifically for “good decision-making heuristics” right now.
I’m not entirely convinced by this. We can evaluate heuristics by saying “how well does implementing them perform?” (which just needs us to have models of the world and of value). I certainly think we can make meaningful judgements that some heuristics are better than others without knowing what the idealised form is.
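A minimal sketch of that evaluation procedure (the environment and heuristics are made up for illustration): given a world model and a payoff function, heuristics can be ranked by average simulated performance, with no reference to an idealised theory.

```python
import random

random.seed(1)

def environment(action):
    """Stochastic payoff for an action in a toy world model."""
    base = {"safe": 1.0, "risky": 0.0}[action]
    bonus = 2.0 if action == "risky" and random.random() < 0.4 else 0.0
    return base + bonus

def always_safe():
    return "safe"

def always_risky():
    return "risky"

def score(heuristic, trials=100_000):
    """Average payoff of a heuristic over many simulated runs."""
    return sum(environment(heuristic()) for _ in range(trials)) / trials

# Rank heuristics by simulated performance alone: "safe" averages 1.0,
# "risky" averages about 0.8, so "safe" wins this comparison.
print(score(always_safe) > score(always_risky))
```

This is the sense in which a world model plus a value model suffices to judge some heuristics as better than others, whatever the idealised form turns out to be.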
That said, I’m sympathetic to the idea that studying the idealised form might be more valuable (although I’m not certain about that). The thrust of my post arguing that understanding the heuristics is valuable was to point out that some people end up discussing heuristics without realising it, rather than to attack such people.