I suppose causal decision theory assumes causality only works in one temporal direction.
That’s the popular understanding (or lack thereof) here and among philosophers in general. Philosophers just don’t get the math. If a decision theory is called causal but doesn’t itself make any reference to physics, then that’s a slightly misleading name. I’ve written on that before.
The math doesn’t go “hey hey, the theory is named causal, therefore you can’t treat two robot arms controlled by two control computers that run one function on one state the same as two robot arms controlled by one computer.” Confused, sloppy philosophers do.
Also, the best case is to be predicted to one-box but to two-box in reality. If the prediction works by backwards causality, well, then causal decision theory one-boxes. If the prediction works by simulation, the causal decision theory agent can either have a world model where the value inside the predictor and the value inside the actual robot are represented by the same action A, and one-box, or it can have uncertainty as to whether the world outside of it is normal reality or the predictor’s simulator, in which case it will again one-box (assuming it cares about the real money even if it is inside the predictor, which it would if it needs the money to pay for e.g. its child’s education). It will also one-box in the simulator and two-box in reality if it can tell the two apart.
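A minimal sketch of the first option (my own toy code and function names, using the standard Newcomb payoffs): the world model represents the value inside the predictor with the same action variable A as the robot’s own action, so evaluating an action fixes both copies at once, and the expected payoffs then favour one-boxing.

```python
# Newcomb payoffs: the opaque box holds 1,000,000 if the predictor predicted
# one-boxing, and the transparent box always holds 1,000 (the standard numbers).
BIG, SMALL = 1_000_000, 1_000

def payoff(action, predicted):
    """Money received given the robot's action and the predictor's prediction."""
    opaque = BIG if predicted == "one-box" else 0
    return opaque if action == "one-box" else opaque + SMALL

# World-model option 1: the value inside the predictor is represented by the
# SAME action variable A as the robot's own action, so choosing an action
# sets both copies at once.
def expected_value(action):
    return payoff(action, predicted=action)

for a in ("one-box", "two-box"):
    print(a, expected_value(a))   # one-box 1000000, two-box 1000
```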
I’m confused. Causal decision theory was invented or formalised almost entirely by philosophers. It takes the ‘causal’ in its name from its reliance on inductive logic and inference. It doesn’t make sense to claim that philosophers are being sloppy about the word ‘causal’ here, and claiming that causal decision theory will accept backwards causality and one-box is patently false unless you mean something other than what the symbol ‘causal decision theory’ refers to when you say ‘causal decision theory’.
Firstly, the notion that actions should be chosen based on their consequences, taking the actions as the cause of the consequences, was definitely not invented by philosophers. Secondly, logical causality is not identical to physical causality (the latter depends on the specific laws of physics). Thirdly, not all philosophers are sloppy; some are very sloppy, some are less so. Fourthly, anything that has not been put into mathematical form to be manipulated by formal methods is not formalized. When you formalize things, you end up stripping out the notion of self unless it is explicitly included as part of the formalism, stripping out the notion of the time at which the math is being worked unless that is explicitly included as part of the formalism, and so on, ending up without the problem.
Maybe you are correct; it is better to let the symbol ‘causal decision theory’ refer to the confused philosophy. Then we would need some extra symbol for how agents actually implementable using mathematics decide (and how robots that predict the outcomes of their actions on a world model actually work), which is very similar to ‘causal decision theory’ sans all the human preconceptions about what the self is.
I notice I actually agree with you—if we did try, using mathematics, to implement agents who decide and predict in the manner you describe, we’d find it incorrect to describe these agents as causal decision theory agents. In fact, I also expect we’d find ourselves disillusioned with CDT in general, and if philosophers brought it up, we’d direct them to instead engage with the much more interesting agents we’ve mathematically formalised.
Well, each philosopher’s understanding of CDT seems to differ from the next:
http://www.public.asu.edu/~armendtb/docs/A%20Foundation%20for%20Causal%20Decision%20Theory.pdf
The notion that actions should be chosen based on consequences (as expressed in the formula here) is perfectly fine, albeit incredibly trivial. You can formalize that all the way into an agent; I have written such agents myself. We still need a symbol to describe this type of agent.
But philosophers go from this to “my actions should be chosen based on consequences”, and then it becomes all about the true meaning of ‘self’ and falls within the purview of your conundrums of philosophy.
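As a sketch of what “formalize that all the way into an agent” might look like in code (my own illustration; the function and variable names are made up, not taken from the linked paper): choose the action whose modelled consequences have the highest expected utility.

```python
# A minimal consequence-chooser: given a world model mapping an action to a
# distribution over outcomes, and a utility over outcomes, pick the action
# with the highest expected utility.
def choose(actions, world_model, utility):
    def expected_utility(action):
        return sum(p * utility(outcome) for outcome, p in world_model(action).items())
    return max(actions, key=expected_utility)

# Toy usage: outcomes are amounts of money, utility is just the amount.
model = lambda a: {10: 0.9, 0: 0.1} if a == "safe" else {100: 0.2, 0: 0.8}
print(choose(["safe", "risky"], model, utility=lambda money: money))  # prints "risky"
```

Note that nothing in this sketch mentions a ‘self’; the agent is just a function from a world model and a utility to an action.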
Between having one computer control two robot arms wired in parallel, and having two computers, each running the exact same software as before and each controlling one robot arm, there is no difference for software engineering; it’s a minor detail that has been entirely abstracted away from the software. There is a difference for philosophizing, though, because you can’t collapse logical consequence and physical causality into one thing in the latter case.
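A toy illustration of that point (my own stand-in code, not any real robot stack): the control software is the same function in both deployments, and nothing at the software level distinguishes them.

```python
# The same control software, deployed two ways; the deployment detail is
# invisible at the level of the code. A toy stand-in for the real control loop:
def control(state):
    return "extend" if state["target_visible"] else "hold"

state = {"target_visible": True}

# Deployment 1: one computer, two arms wired in parallel to its single output.
command = control(state)
arms_parallel = [command, command]

# Deployment 2: two computers, each running an identical copy of control().
arms_two_computers = [control(state), control(state)]

assert arms_parallel == arms_two_computers  # no behavioural difference
```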
edit: Anyhow, to summarize my point: in terms of agents actually formalized in software, one-boxing is only a matter of implementing the predictor into the world model somehow, either as a second servo controlled by the same control variables, or as an uncertain world state outside the senses (in the unseen there is either the real world or a simulator that affects the real world via the hand of the predictor). No conceptual problems whatsoever.

edit: A good analogy is the ‘twin paradox’ in special relativity. There is only a paradox if nobody has done the math right.
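And a toy version of the second option above (an uncertain world state outside the senses), with an arbitrary 0.5 credence of being inside the simulator assumed for illustration; since the robot cannot tell the two situations apart, both branches follow the same decision and the ranking of actions comes out the same as in the first sketch.

```python
# World-model option 2: the robot is unsure whether the world outside its senses
# is reality or the predictor's simulator, and it cares about the real money
# either way. The 0.5 credence is an arbitrary assumption.
BIG, SMALL = 1_000_000, 1_000
P_SIM = 0.5

def real_money(real_action, simulated_action):
    opaque = BIG if simulated_action == "one-box" else 0  # predictor copies the sim
    return opaque if real_action == "one-box" else opaque + SMALL

def expected_real_money(action):
    # The robot cannot tell the copies apart, so whatever it decides here is what
    # both copies do; the credence ends up not changing the ranking at all.
    in_sim = real_money(real_action=action, simulated_action=action)
    in_reality = real_money(real_action=action, simulated_action=action)
    return P_SIM * in_sim + (1 - P_SIM) * in_reality

for a in ("one-box", "two-box"):
    print(a, expected_real_money(a))   # one-box 1000000.0, two-box 1000.0
```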