You don’t need an actual God for this line of reasoning to work; a semi-God is enough, for the following reason:
The AI should assign a small probability to being inside a testing simulation created by a higher-level AI to evaluate its moral qualities, with the test centered on how it cares for humans.
If the AI assigns even the smallest probability to this, the expected penalty for failing the test can outweigh the utility of the atoms humans consist of (which is low anyway), so it will preserve our lives and provide us with many good things.
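To make the trade-off concrete, here is a minimal expected-utility sketch in Python. All the numbers (the test probability `p_test`, the atom value `u_atoms`, and the pass/fail payoffs) are illustrative assumptions, not figures from the argument itself.

```python
# Minimal expected-utility sketch of the argument above.
# All numeric values are illustrative assumptions.

p_test = 1e-6     # small probability the AI is in a moral-test simulation
u_atoms = 1.0     # modest utility of repurposing humans' atoms
u_pass = 1e9      # large payoff from the higher-level AI for passing
u_fail = -1e9     # large penalty for failing the test

# Destroying humans: gain the atoms if the world is real,
# but incur the failure penalty if it is a test.
eu_destroy = (1 - p_test) * u_atoms + p_test * u_fail

# Preserving humans: forgo the atoms, but pass the test.
eu_preserve = (1 - p_test) * 0.0 + p_test * u_pass

print(f"destroy:  {eu_destroy:.3f}")   # ~ -999.0
print(f"preserve: {eu_preserve:.3f}")  # ~ 1000.0
```

With these numbers, preserving humans wins whenever `p_test * (u_pass - u_fail)` exceeds `(1 - p_test) * u_atoms`, i.e. even a tiny `p_test` suffices when the test stakes dwarf the value of the atoms.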
A similar idea was explored in Bostrom’s “Hail Mary and Value Porosity” paper, where a hypothetical alien superintelligence plays the role of such a judge.
Interesting. I should look into more of Bostrom’s work then.