It could just be that you have a preference for CDT, since you wrote: “Son-of-CDT is the agent with the source code such that running that source code is the best way to convert [energy + hardware + actuators + sensors] into utility.” That claim is not true if you consider logical counterfactuals; it would be accurate only if you were concerned with affecting the future solely via causal counterfactuals.
Personally, I think FDT performs better, not simply because I’d want to precommit to being FDT, but because I think it is philosophically better to consider logical counterfactuals rather than causal counterfactuals.
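To make the distinction concrete, here is a minimal sketch of a twin prisoner’s dilemma in Python, with hypothetical payoffs chosen purely for illustration: the causal counterfactual holds the twin’s action fixed, while the logical counterfactual intervenes on the shared source code, so the two styles of counterfactual recommend different actions.

```python
# Toy twin prisoner's dilemma (hypothetical payoffs): two agents run
# identical source code, so a logical counterfactual on "my" action
# also changes the twin's action, while a causal one does not.

PAYOFFS = {  # (my_action, twin_action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def causal_value(my_action, twin_action):
    # Causal counterfactual: the twin's action is held fixed, since
    # changing my action cannot causally affect what the twin does.
    return PAYOFFS[(my_action, twin_action)]

def logical_value(my_action):
    # Logical counterfactual: if my source code had output a different
    # action, my twin's identical code would have output it too.
    return PAYOFFS[(my_action, my_action)]

# Under causal counterfactuals (twin's action taken as given, say "C"),
# defection looks strictly better:
assert causal_value("D", "C") > causal_value("C", "C")  # 5 > 3

# Under logical counterfactuals, cooperation looks better:
assert logical_value("C") > logical_value("D")  # 3 > 1
```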
You’re taking issue with my evaluating the causal consequences, rather than the logical consequences, of our choice of which program to run in the agent? These should coincide in practice when we build an AGI, since we’re not in some weird decision problem at the moment, so far as I can tell. Or, if you think I’m missing something: what are the non-causal, logical consequences of building a CDT AGI?
“what are the non-causal, logical consequences of building a CDT AGI?”
As stated elsewhere in these comments, I think multiverse cooperation is significant and important. And of course, I am also concerned with ordinary Newcomblike dilemmas that might occur around the development of AI, when we can actually run an agent’s code to predict its behavior. On the other hand, there seems to me to be no upside to running CDT rather than FDT, conditional on our solving all of the problems with FDT.
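For the Newcomblike case specifically, here is a minimal sketch, again with hypothetical payoff numbers, of a predictor that fills the box by literally running the agent’s decision procedure, which is the “we can actually run its code” scenario above. The agent whose code one-boxes ends up with more, even though, once the box is filled, two-boxing causally dominates.

```python
# Minimal Newcomb-style setup (hypothetical payoffs): the predictor
# simulates the agent's decision procedure before the agent chooses.

def predictor_fills_box(agent):
    # The predictor runs the agent's code and puts $1M in the opaque
    # box iff the simulated agent one-boxes.
    return agent() == "one-box"

def payoff(agent):
    million_present = predictor_fills_box(agent)
    action = agent()
    if action == "one-box":
        return 1_000_000 if million_present else 0
    # Two-boxing: opaque box contents plus the transparent $1k.
    return (1_000_000 if million_present else 0) + 1_000

fdt_agent = lambda: "one-box"   # treats the simulation as logically correlated with it
cdt_agent = lambda: "two-box"   # treats the box contents as causally fixed

assert payoff(fdt_agent) == 1_000_000
assert payoff(cdt_agent) == 1_000
```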