I notice I actually agree with you—if we did try, using mathematics, to implement agents who decide and predict in the manner you describe, we’d find it incorrect to describe these agents as causal decision theory agents. In fact, I also expect we’d find ourselves disillusioned with CDT in general, and if philosophers brought it up, we’d direct them to instead engage with the much more interesting agents we’ve mathematically formalised.
Well, each philosopher’s understanding of CDT seems to differ from every other’s:
http://www.public.asu.edu/~armendtb/docs/A%20Foundation%20for%20Causal%20Decision%20Theory.pdf
The notion that actions should be chosen based on their consequences, as expressed in the formula here, is perfectly fine, albeit incredibly trivial. You can formalize it all the way into an agent; I have written such agents myself. You still need a symbol to describe this type of agent, though.
But philosophers go from this to “my actions should be chosen based on consequences”, at which point it becomes all about the true meaning of “self” and falls within the purview of your conundrums of philosophy.
Compare having one computer control two robot arms wired in parallel with having two computers, each running the exact same software as before and each controlling one robot arm. For software engineering there is no difference; which arrangement you use is a minor detail that has been entirely abstracted away from the software. There is a difference for philosophizing, though, because in the latter case you can’t collapse logical consequence and physical causality into one thing.
edit: Anyhow, to summarize my point: in terms of agents actually formalized in software, one-boxing is only a matter of implementing the predictor into the world model somehow, either as a second servo controlled by the same control variables, or as an uncertain world state outside the senses (in the unseen there is either the real world, or a simulator that affects the real world via the hand of the predictor). No conceptual problems whatsoever.

edit: A good analogy is the “twin paradox” in special relativity: there is only a paradox if nobody has done the math right.
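To make the “second servo” framing concrete, here is a minimal sketch in Python, under my own assumptions about the payoffs and the predictor’s accuracy (the names, numbers, and structure are illustrative, not anyone’s canonical formalisation): the world model treats the predictor as a second effector driven by the same decision variable as the agent’s own hand, and plain expected-utility maximization over that model one-boxes.

```python
# Newcomb's problem with the predictor modelled as a "second servo":
# the world model couples the predictor's output to the very decision
# variable being evaluated. Payoff values are the usual illustrative ones.

BOX_A = 1_000_000  # opaque box, filled iff the predictor foresees one-boxing
BOX_B = 1_000      # transparent box, always contains this amount

def payoff(action, prediction):
    """Agent's payoff given its action and the predictor's prediction."""
    a = BOX_A if prediction == "one-box" else 0
    return a if action == "one-box" else a + BOX_B

def flip(action):
    """The other action (used for the predictor's rare mistakes)."""
    return "two-box" if action == "one-box" else "one-box"

def choose(accuracy=0.99):
    """Pick the action maximizing expected utility under a world model in
    which the predictor is a second servo on the same control variable:
    with probability `accuracy` it outputs the action being evaluated."""
    def eu(action):
        return (accuracy * payoff(action, action)
                + (1 - accuracy) * payoff(action, flip(action)))
    return max(("one-box", "two-box"), key=eu)
```

With `accuracy=0.99`, `choose()` returns `"one-box"`: the expected utilities come out to roughly 990,010 versus 11,000, so once the predictor sits inside the world model as a second servo, one-boxing drops out of ordinary consequence-based choice with no extra machinery.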