> It’s not the mathematical function that determines the output of the physical system, it’s the output of the physical system that determines what mathematical function describes it.
Your brain implements some kind of decision procedure—a mathematical function—that determines what your body does. You decide to lift your left hand, after which your left hand goes up.
> Sure, to an extent, reversing the direction of causality is the point.
Let’s be clear on this: reversing the direction of causality is not the point, and FDT does not use backwards causation in any way. In Newcomb’s Problem, you don’t influence Omega’s model of your decision procedure in any way; you just know that if your decision procedure outputs “one-box”, then Omega’s model of you did so too. This is no different than two identical calculators outputting 4 on 2 + 2, even though there is no causal arrow from one to the other. I plan on doing a whole sequence on FDT, including a post on subjunctive dependence, btw.
If reversing the direction of causality was the point even a little bit, I would not be taking FDT so seriously as I do.
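To make the calculator analogy concrete, here is a minimal illustrative sketch (toy code, not from the thread or from FDT's formalism): two causally disconnected instances of the same procedure whose outputs nevertheless co-vary, which is all that subjunctive dependence asks for.

```python
def make_calculator():
    # Each call builds an independent calculator instance.
    return lambda a, b: a + b

calc_here = make_calculator()    # your calculator
calc_omega = make_calculator()   # Omega's model of it, causally disconnected

# Knowing one output pins down the other, not because either causes
# the other, but because both instances implement the same function.
assert calc_here(2, 2) == calc_omega(2, 2) == 4
```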
> Your brain implements some kind of decision procedure—a mathematical function—that determines what your body does.
My brain implements a physical computation that determines what my body does. We can make up a bunch of counterfactuals for what this physical computation would have been, had it been given different inputs, and define a mathematical function to be this. However, that mathematical function is determined by the counterfactual outputs, rather than determining the counterfactual outputs.
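A minimal illustrative sketch of this point (toy code, not from the thread; `physical_system` is a hypothetical stand-in for any black-box process): the mathematical function is read off from the system's counterfactual input-output behaviour, rather than producing that behaviour.

```python
def physical_system(stimulus: int) -> int:
    # Stand-in for the physical computation; we only observe its outputs.
    return (stimulus * stimulus) % 7

# Probe the system on a range of counterfactual inputs...
behaviour = {s: physical_system(s) for s in range(7)}

# ...and *define* the describing mathematical function as that table.
def described_function(stimulus: int) -> int:
    return behaviour[stimulus]

assert all(described_function(s) == physical_system(s) for s in range(7))
```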
Thanks for your reply.
> My brain implements a physical computation that determines what my body does.

Exactly, a decision procedure. Which is an implementation of a mathematical function, and that’s what FDT is talking about.

> We can make up a bunch of counterfactuals for what this physical computation would have been, had it been given different inputs, and define a mathematical function to be this. However, that mathematical function is determined by the counterfactual outputs, rather than determining the counterfactual outputs.

I don’t follow.
I guess to simplify the objection:
A core part of Pearl’s paradigm is the way he defines how to go from a causal graph to a set of observations of variables. This definition pretty much defines causality, and serves as the core for further reasoning about it.
Logical causality lacks the equivalent: a way to go from a causal logical graph to a set of truth values for propositions. I have some basic solutions to this problem, but they are all much less powerful than what people want out of logical causality. In particular, the problem is trivially solvable if one considers computational causality instead of logical causality.
Most people advocating for logical causality seem to disregard this approach, and instead want to define logical causality purely in terms of logical counterfactuals (whereas the Pearlian approach would usually define counterfactuals in terms of causality). I don’t see any reason to expect this to work.
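For reference, a minimal illustrative sketch of the Pearlian machinery being referenced (toy code with invented variable names, not Pearl's own notation): structural equations take a causal graph to concrete values of its variables, and an intervention simply overwrites one equation. It is this graph-to-observations step that, on the objection above, has no agreed-upon analogue for logical causality.

```python
import random

def sample(do=None):
    # Evaluate the toy graph U -> X -> Y in topological order; `do`
    # optionally replaces a variable's structural equation (an intervention).
    do = do or {}
    v = {}
    v["U"] = do.get("U", random.gauss(0, 1))      # exogenous noise
    v["X"] = do.get("X", 1 if v["U"] > 0 else 0)  # X := f(U)
    v["Y"] = do.get("Y", 2 * v["X"] + 1)          # Y := g(X)
    return v

observed = sample()                # observations the graph gives rise to
intervened = sample(do={"X": 1})   # Pearl's do(X = 1)
```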
I guess I have several problems with logical causality/FDT/LDT.
First, there’s a distinction between “efficient algorithm”, “algorithm”, “constructive/intuitionistic function” and “(classical) mathematical function”. Suppose someone tells me to implement a squaring function, so I write some code for arithmetic and have a program output some squares. In this case, one can sooort of say that the mathematical function of “squaring” causally influences the output, at least as a fairly accurate abstract approximation. But I wouldn’t be able to implement the vast majority of mathematical functions, so it is a pretty questionable frame. As you go further down the hierarchy towards “efficient algorithm”, implementation becomes more viable, as issues such as “this function cannot be defined with any finite amount of information” or “I cannot think of any algorithm to implement this function” dissipate. I have much less of a problem with notions like “computational causality” (or alternatively a sort of logical-semantic causality I’ve come up with a definition for; I’ve been debating whether to write a LW post about it, but I’m leaning towards no because LDT-type stuff seems like a dead end).
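A minimal illustrative sketch of the two ends of that hierarchy (toy code, not from the thread; the Collatz example is chosen only because it is easy to run): squaring has a short, efficient implementation, while other perfectly well-defined classical questions admit at best a semi-decision procedure.

```python
def square(n: int) -> int:
    # The friendly end of the hierarchy: a short, efficient algorithm.
    return n * n

def maybe_halts(n: int, budget: int = 10 ** 6) -> bool | None:
    # Semi-decide whether the Collatz iteration from n reaches 1: True
    # means "provably yes", None means "budget exhausted, still unknown".
    # The question is classically well-defined for every n, yet no known
    # algorithm settles it in general; for the halting function proper,
    # Turing showed that no algorithm can exist at all.
    while budget > 0:
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
        budget -= 1
    return None
```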
However, even insofar as we grant that the above process implies a logical causality, I wasn’t created by this sort of process. I was created by some complicated history involving e.g. evolution. This complicated history doesn’t have any point at which one could say a mathematical function was taken and implemented; instead, evolution is a continuous optimization process working against a changing reality.
Finally, even if all of these problems were solved, the goal with logical causality is often not just to cover structurally identical decision procedures, but also logically “isomorphic” things, for an extremely broad definition of “isomorphic” covering e.g. “proof searches about X” as well as X itself. But computationally, proof searches are very distinct from the objects they are reasoning about.
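A toy illustration of that gap (sketch code, not from the thread): a direct computation and a proof search that settle the same fact can be structurally unrelated computations.

```python
def is_even(n: int) -> bool:
    # The object X itself: a direct computation.
    return n % 2 == 0

def proof_search_is_even(n: int, max_steps: int = 1_000) -> bool:
    # A brute-force proof search *about* X, in a toy system with axiom
    # Even(0) and inference rule Even(k) |- Even(k + 2). It settles the
    # same fact as is_even, but as a computation it looks nothing like
    # evaluating n % 2.
    derivable = {0}
    for _ in range(max_steps):
        if n in derivable:
            return True
        derivable |= {k + 2 for k in derivable}
    return False
```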