I’m confused. Causal decision theory was invented or formalised almost entirely by philosophers. It takes the ‘causal’ in its name from its reliance on inductive logic and inference. It doesn’t make sense to claim that philosophers are being sloppy about the word ‘causal’ here, and claiming that causal decision theory will accept backwards causality and one-box is patently false unless you mean something other than what the symbol ‘causal decision theory’ refers to when you say ‘causal decision theory’.
Firstly, the notion that actions should be chosen based on their consequences, taking the actions as the cause of the consequences, was definitely not invented by philosophers. Secondly, logical causality is not identical to physical causality (the latter depends on the specific laws of physics). Thirdly, not all philosophers are sloppy; some are very sloppy, some less so. Fourth, anything that was not put into mathematical form to be manipulated with formal methods is not formalized. When you formalize things, you end up stripping out the notion of self unless it is explicitly included as part of the formalism, stripping out the notion of the time at which the math is being done unless that is explicitly included, and so on, and you end up without the problem.
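To make the fourth point concrete, the kind of formula such a formalization boils down to is the standard expected-utility choice rule (given purely as an illustration, not as any particular philosopher’s formulation):

$$a^{*} = \operatorname*{arg\,max}_{a \in A} \sum_{o \in O} P(o \mid a)\, U(o)$$

There is no symbol for ‘self’ and no symbol for ‘now’ anywhere in it unless one explicitly adds them.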
Maybe you are correct; it is better to let the symbol ‘causal decision theory’ refer to the confused philosophy. Then we would need some extra symbol for how agents implementable in mathematics actually decide (and how robots that predict the outcomes of their actions on a world model actually work), which is very similar to ‘causal decision theory’ minus all the human preconceptions about what the self is.
I notice I actually agree with you—if we did try, using mathematics, to implement agents who decide and predict in the manner you describe, we’d find it incorrect to describe these agents as causal decision theory agents. In fact, I also expect we’d find ourselves disillusioned with CDT in general, and if philosophers brought it up, we’d direct them to instead engage with the much more interesting agents we’ve mathematically formalised.
Well, each philosopher’s understanding of CDT seems to differ from every other’s:
http://www.public.asu.edu/~armendtb/docs/A%20Foundation%20for%20Causal%20Decision%20Theory.pdf
The notion that actions should be chosen based on consequences, as expressed in the formula here, is perfectly fine, albeit incredibly trivial. You can formalize that all the way into an agent; I have written such agents myself. We still need a symbol to describe this type of agent.
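For concreteness, here is a minimal sketch of such an agent (hypothetical names; it assumes the world model hands back an outcome distribution per action, and is a toy illustration rather than anyone’s actual code):

```python
def choose_action(actions, world_model, utility):
    """Pick the action whose predicted consequences maximize expected utility.

    world_model(action) -> iterable of (probability, outcome) pairs
    utility(outcome)    -> a real number
    """
    def expected_utility(action):
        return sum(p * utility(o) for p, o in world_model(action))
    return max(actions, key=expected_utility)
```

Nothing in it refers to a ‘self’; the action is just a variable the agent sets, and the world model says what follows from it.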
But philosophers go from this to “my actions should be chosen based on consequences”, and then it is all about the true meaning of self, and it falls within the purview of your conundrums of philosophy.
Having one computer control two robot arms wired in parallel, versus having two computers running the exact same software, each controlling one robot arm: there is no difference for software engineering; it is a minor detail that has been entirely abstracted away from the software. There is a difference for philosophizing, though, because you can’t collapse logical consequence and physical causality into one thing in the latter case.
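A toy illustration of that point (hypothetical controller; the numbers are meaningless):

```python
def control_step(sensed_state):
    # Some controller; it knows nothing about how many arms consume its output.
    return -0.5 * sensed_state

# One computer, two arms wired in parallel to its single output:
command = control_step(2.0)
parallel_arms = (command, command)

# Two computers running the exact same software on the same input, one arm each:
separate_computers = (control_step(2.0), control_step(2.0))

# The software cannot tell these cases apart; the difference lives in the wiring.
assert parallel_arms == separate_computers
```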
edit: Anyhow, to summarize my point: in terms of agents actually formalized in software, one-boxing is only a matter of implementing the predictor into the world model somehow, either as a second servo controlled by the same control variables, or as uncertain world state outside the senses (in the unseen there is either the real world, or a simulator that affects the real world via the hand of the predictor). No conceptual problems whatsoever.
edit: A good analogy is the ‘twin paradox’ in special relativity. There is only a paradox if nobody has done the math right.
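A toy sketch of the first option, with the predictor modeled as a second servo driven by the same decision variable (standard Newcomb payoffs; the 0.99 predictor accuracy is an assumed figure for illustration):

```python
# (my choice, predictor's prediction) -> payoff in dollars, standard Newcomb setup
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def outcomes(choice, accuracy=0.99):
    """World model: the prediction is driven by the same variable as my choice."""
    mismatch = "two-box" if choice == "one-box" else "one-box"
    return [(accuracy, PAYOFF[(choice, choice)]),
            (1 - accuracy, PAYOFF[(choice, mismatch)])]

def expected_utility(choice):
    return sum(p * payoff for p, payoff in outcomes(choice))

print(max(["one-box", "two-box"], key=expected_utility))  # prints: one-box
```

With the prediction wired to the same variable, one-boxing falls straight out of the expected-utility bookkeeping.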