Responses to Sections V and VI:

I’m puzzled by this concern. Is the doctrine of expected utility plagued by a corresponding ‘implausible discontinuity’ problem? If action 1 has expected value .999 and action 2 has expected value 1, you should take action 2; yet a very small change could mean you should take action 1. Is CDT plagued by an implausible-discontinuity problem? Two problems which EDT treats identically can differ in causal expected value, so there must be some in-between problem where uncertainty about the causal structure exactly balances the two options, and CDT’s recommendation makes an implausibly sharp shift when that uncertainty is jiggled a little. Can’t we similarly boggle at the implausibility that a tiny change in the physical structure of a problem should make such a large difference in the causal structure as to change CDT’s recommendation? (For example, the tiny change can be a small adjustment to the coin which determines which of two causal structures will be in play, with no overall change in the evidential structure.)
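To make the expected-utility version of the point concrete, here is a minimal sketch (the particular payoffs are invented for illustration): an arbitrarily small perturbation of the expected values flips the recommendation, and nobody takes that to be an objection to expected utility theory.

```python
# Two actions with nearly equal expected value: the recommendation
# flips under an arbitrarily small perturbation of the payoffs.

def best_action(ev1, ev2):
    """Return whichever action has the higher expected value."""
    return "action 1" if ev1 > ev2 else "action 2"

print(best_action(0.999, 1.0))  # prints "action 2"
print(best_action(1.001, 1.0))  # prints "action 1": a 0.002 nudge flips the choice
```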
It seems like what you find implausible about FDT here has nothing to do with discontinuity, unless you find CDT and EDT similarly implausible.
FDT is deeply indeterminate
This is obviously a big challenge for FDT; we don’t know what logical counterfactuals look like, and invoking them is problematic until we do.
However, I can point to some toy models of FDT which lend credence to the idea that there’s something there. The most interesting may be MUDT (see the “modal UDT” section of this summary post). This decision theory uses the notion of “possible” from the modal logic of provability: although the agent is deterministic and therefore only takes one particular action in fact, it has a well-defined possible-world structure to consider in making decisions, derived from what it can prove.
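As a cartoon of that possible-world structure (not the real MUDT, which does proof search in provability logic rather than direct evaluation, and with an invented Newcomb-style payoff table), one can model the agent as evaluating each of its own possible outputs, even though as a deterministic process it will only ever produce one of them:

```python
# Toy stand-in for modal UDT: the "possible worlds" are indexed by the
# agent's possible outputs, and each is scored by substituting that
# output into the world. Payoffs are an invented Newcomb-style table.

def world(agent_output):
    """Newcomb-like world: the predictor fills the big box iff the
    agent's algorithm one-boxes."""
    predictor_fills_box = (agent_output == "one-box")
    big_box = 1_000_000 if predictor_fills_box else 0
    if agent_output == "one-box":
        return big_box
    else:  # two-box: big box plus the transparent $1,000
        return big_box + 1_000

def agent():
    # Direct evaluation here plays the role that provability plays in
    # MUDT: each candidate output gets a well-defined counterfactual value.
    return max(["one-box", "two-box"], key=world)

print(agent())  # prints "one-box"
```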
I have a post planned that focuses on a different toy model, single-player extensive-form games. This has the advantage of being only as exotic as standard game theory.
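To sketch what that toy model looks like (the particular game is my invention, not from the planned post): in a single-player extensive-form game, an FDT-flavored agent selects a whole policy, a function from information sets to actions, rather than optimizing separately at each node.

```python
import itertools

# A tiny invented single-player extensive-form game: chance flips a fair
# coin, but the agent sees only one coarse signal, so both branches share
# a single information set. A policy maps information sets to actions.

CHANCE = [("heads", 0.5), ("tails", 0.5)]
INFO_SETS = ["signal"]          # the agent can't tell heads from tails
ACTIONS = ["left", "right"]

# Invented payoffs, keyed by (coin outcome, action).
PAYOFF = {
    ("heads", "left"): 3, ("heads", "right"): 0,
    ("tails", "left"): 0, ("tails", "right"): 2,
}

def expected_utility(policy):
    action = policy["signal"]
    return sum(p * PAYOFF[(coin, action)] for coin, p in CHANCE)

def best_policy():
    # Enumerate every function from information sets to actions and
    # take the one with the highest expected utility.
    policies = [dict(zip(INFO_SETS, choice))
                for choice in itertools.product(ACTIONS, repeat=len(INFO_SETS))]
    return max(policies, key=expected_utility)

print(best_policy())  # prints {'signal': 'left'}: EU 1.5 beats 'right' at 1.0
```

Because the optimization ranges over policies rather than per-node moves, the setup is only as exotic as standard game theory, which is the advantage mentioned above.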
In both of these cases, FDT can be well-specified (at least, to the extent we’re satisfied with calling the toy DTs examples of FDT—which is a bit awkward, since FDT is kind of a weird umbrella term for several possible DTs, but also kind of specifically supposed to use functional graphs, which MUDT doesn’t use).
It bears mentioning that a Bayesian already regards the probability distribution representing a problem as deeply indeterminate, so this seems less bad if you start from such a perspective. Logical counterfactuals can similarly be thought of as subjective objects, rather than some objective fact which we have to uncover in order to know what FDT does.
On the other hand, greater indeterminacy is still worse; just because we already have lots of degrees of freedom to mess ourselves up with doesn’t mean we happily accept even more.
> And in general, it seems to me, there’s no fact of the matter about which algorithm a physical process is implementing in the absence of a particular interpretation of the inputs and outputs of that physical process.
Part of the reason that I’m happy for FDT to need such a fact is that I think I need such a fact anyway, in order to deal with anthropic uncertainty, and other issues.
If you don’t think there’s such a fact, then you can’t take a computationalist perspective on theory of mind—in which case, I wonder what position you take on questions such as consciousness. Obviously this leads to a number of questions which are quite aside from the point at hand, but I would personally think that questions such as whether an organism is experiencing suffering have to do with what computations are occurring. This ultimately cashes out to physical facts, yes, but it seems as if suffering should be a fundamentally computational fact which cashes out in terms of physical facts only in a substrate-independent way (i.e., the physical facts of importance are precisely those which pertain to the question of which computation is running).
> But almost all accounts of computation in physical processes have the issue that very many physical processes are running very many different algorithms, all at the same time.
Indeed, I think this is one of the main obstacles to a satisfying account—a successful account should not have this property.
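A standard way to make the interpretation-relativity vivid in code (my illustration, not from the text): the very same physical input-output behavior implements different Boolean algorithms under different interpretations of its wires.

```python
# One "physical" device: a fixed mapping from voltage pairs to a voltage.
# Under the encoding high=1 it computes AND; under the inverted encoding
# high=0 the very same table computes OR (De Morgan duality).
# Same physics, two algorithms, depending on the interpretation.

DEVICE = {  # (v_in1, v_in2) -> v_out
    ("hi", "hi"): "hi",
    ("hi", "lo"): "lo",
    ("lo", "hi"): "lo",
    ("lo", "lo"): "lo",
}

def run(device, x, y, encode, decode):
    """Run the physical device under an interpretation of its wires."""
    return decode[device[(encode[x], encode[y])]]

HIGH_IS_1 = ({1: "hi", 0: "lo"}, {"hi": 1, "lo": 0})
HIGH_IS_0 = ({1: "lo", 0: "hi"}, {"hi": 0, "lo": 1})

# Under high=1 the device's table is AND; under high=0 it is OR.
assert all(run(DEVICE, x, y, *HIGH_IS_1) == (x and y) for x in (0, 1) for y in (0, 1))
assert all(run(DEVICE, x, y, *HIGH_IS_0) == (x or y) for x in (0, 1) for y in (0, 1))
```

An account on which both readings count equally as "the" algorithm being run has exactly the property complained about above, which is why a successful account needs to do better.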