Besides implementation details, what differences are there between rationalists’ conception of benevolent AGI and the monotheistic conception of an omnipotent, omniscient, and benevolent God?
We could distinguish belief that something will exist from hope that it will exist. For example, one could hope not to get a disease without committing to the belief that they won’t get it.
If by “rationalist conception of a benevolent AGI” you mean a belief that such an entity will come into existence, then I think one of the primary differences from the monotheistic conception of God is that rationalists don’t necessarily claim a benevolent entity will come into existence. At most, they claim it would be good if one (or many) were developed. But its arrival does not seem inevitable, hence the efforts to ensure that AI is developed safely.
That’s a good distinction between hoping something will exist and believing that it will exist! Thanks.