What are the non-causal, logical consequences of building a CDT AGI?
As stated elsewhere in these comments, I think multiverse cooperation is pretty significant and important. And of course, I am also concerned with ordinary Newcomblike dilemmas that might arise around the development of AI, when we can actually run an AI's code to predict its behavior. On the other hand, there seems to me to be no upside to running CDT rather than FDT, conditional on us solving all of the problems with FDT.
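To make the "run its code to predict its behavior" point concrete, here is a minimal toy sketch of Newcomb's problem, where a predictor simulates the agent's decision procedure before filling the boxes. All function names and the decision-rule implementations are illustrative assumptions, not any standard formalization of CDT or FDT; they are only meant to show why a predictable CDT agent leaves money on the table while an FDT-style agent does not.

```python
# Toy Newcomb's problem: the opaque box contains $1M iff the predictor
# foresaw one-boxing; the transparent box always contains $1,000.
# (Illustrative sketch; the decision rules below are simplified caricatures.)

def payoff(choice, prediction):
    """Total winnings given the agent's choice and the predictor's prediction."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if choice == "two-box" else 0)

def cdt_choice(fixed_prediction):
    # CDT treats the already-made prediction as causally fixed, so
    # two-boxing dominates regardless of what was predicted.
    return max(["one-box", "two-box"],
               key=lambda c: payoff(c, fixed_prediction))

def fdt_choice():
    # FDT-style reasoning: the predictor ran this same procedure, so the
    # prediction matches the choice. Pick the policy with the best payoff
    # under that subjunctive dependence.
    return max(["one-box", "two-box"], key=lambda c: payoff(c, c))

# An accurate predictor simulates the agent, so the CDT agent is
# correctly predicted to two-box and gets only the transparent box.
cdt_outcome = payoff(cdt_choice("two-box"), "two-box")
fdt_outcome = payoff(fdt_choice(), fdt_choice())
print(cdt_outcome, fdt_outcome)  # 1000 1000000
```

The asymmetry is the whole point: once an AI's source code is available to a predictor, its decision procedure is a fact about the world before it ever acts, and CDT's insistence on holding the prediction fixed costs it.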