Does the fact of our present existence tell us anything about the likelihood that a superhuman intelligence would remain ignorant of acausal game theory?
Anthropically, UDT suggests that a variant of SIA should be used [EDIT—depending on your ethics]. I’m not sure exactly what that implies in this scenario. It seems very likely that humans could program a superintelligence that is incapable of understanding acausal game theory, and I trust that far more than I trust any anthropic argument with this many variables. The only reasonably likely loophole is if anthropics could point to humanity being different from most species, such that no other species in the area would be as likely as we are to create a bad AI. I cannot think of any such argument, so it remains unlikely that all superhuman AIs would understand acausal game theory.
Anthropically, UDT suggests that a variant of SIA should be used.
Depending on your preferences about population ethics, and on how the same issues apply to copies. E.g. if you are going to split into many copies, do you care about maximizing their total or their average welfare? The former will result in SIA-like decision making, while the latter will result in SSA-like decision making.
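To make the total-versus-average point concrete, here is a minimal sketch under an assumed toy incubator-style setup of my own (a fair coin, one copy of you on heads, two on tails, and the same per-copy bet offered to every copy); none of the numbers come from the post. It shows that aggregating by total welfare makes the bet attractive exactly when an SIA-style "thirder" probability would, while aggregating by average welfare matches the SSA-style "halfer" answer.

```python
# Toy setup (assumed, not from the post): a fair coin is flipped;
# heads -> you end up as 1 copy, tails -> you split into 2 copies.
# Every copy is offered the same bet: +1 util if tails, -1 util if heads.

p = {"heads": 0.5, "tails": 0.5}
copies = {"heads": 1, "tails": 2}
payoff = {"heads": -1.0, "tails": +1.0}  # per-copy payoff if the bet is taken

# Total welfare: sum the per-copy payoff over every copy in each world.
ev_total = sum(p[w] * copies[w] * payoff[w] for w in p)

# Average welfare: each world contributes only the per-copy payoff.
ev_average = sum(p[w] * payoff[w] for w in p)

print(f"EV under total welfare:   {ev_total:+.2f}")    # +0.50 -> take the bet
print(f"EV under average welfare: {ev_average:+.2f}")  # +0.00 -> indifferent

# The total-welfare agent bets as if P(tails) = 2/3 (SIA-like, "thirder"),
# while the average-welfare agent bets as if P(tails) = 1/2 (SSA-like, "halfer").
```

Pushing the same arithmetic further: a total-welfare agent would pay up to 2/3 of a util for a ticket paying 1 util on tails, while an average-welfare agent would only pay up to 1/2, which is where the SIA-like versus SSA-like labels come from.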