Hey Rob, on the question of God, you wrote: “This question is ‘philosophy in easy mode’, so seems like a decent proxy for field health / competence”
Saying that this is “philosophy in easy mode” implies that the answer is obvious, and the way you phrased it above makes it seem like atheism is obviously the correct answer.
How would you answer a question I asked about a year ago: Besides implementation details, what differences are there between rationalists’ conception of benevolent AGI and the monotheistic conception of an omnipotent, omniscient, and benevolent God? (source tweet)
We could distinguish belief in something from hope that it will exist. For example, one could hope that they won’t get a disease without committing to the belief that they won’t get it.
If by “rationalist conception of a benevolent AGI” you are referring to a belief that such an entity will come into existence, then I think one of the primary differences between this and the monotheistic conception of God is that rationalists don’t necessarily claim that such a benevolent entity will come into existence. At most, they claim it would be good if one (or many) were developed. But its arrival does not seem inevitable, hence the efforts to ensure that AI is developed safely.
That’s a good distinction between hoping that something will exist and believing that it exists! Thanks.
I don’t know what you meant to set aside by saying “Besides implementation details”, but it seems worth noting that the most important difference is that AGI (if it existed today) would be a naturalistic posit, not a supernatural or magical hypothesis.
To my eye, your question sounds like ‘What’s the difference between believing sorcerers exist who can conjure arbitrarily large fireballs, and believing engineers exist who can build flamethrowers?’ One is magical (seems strongly contrary to the general character of physical law, treats human-psychology-ish concepts as fundamental rather than physics-ish concepts, etc.), the other isn’t.
Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. Whether it has those traits to any degree would need to be determined empirically, from the apparent actions of this AGI, and only then believed.
Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.
What empirical evidence would someone need to observe in order to believe that such an AGI, one that is maximal in any of those traits, exists?
Maximality of those traits? I don’t think that’s empirically determinable at all, and certainly not practically measurable by humans.
One can certainly have beliefs about comparative levels of power, knowledge, and benevolence. The types of evidence for and against them should be pretty obvious under most circumstances. Evidence against those traits being greater than some particular standard is also evidence against maximality of those traits. However, evidence for reaching some particular standard is only evidence for maximality if you already believe that the standard in question is the highest that can possibly exist.
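To sketch that last point more formally (a toy Bayesian formalization; $M$, $S$, and $s$ are just labels introduced here): let $M$ be the hypothesis that some trait of the entity is maximal, and $S$ the hypothesis that the trait meets a particular standard $s$. Maximality entails meeting any standard that can be met at all, so $M \Rightarrow S$ and $P(M) \le P(S)$. Then, since $P(S \mid M) = 1$,

$$P(M \mid S) = \frac{P(S \mid M)\,P(M)}{P(S)} = \frac{P(M)}{P(S)}.$$

Evidence against $S$ drags $P(M)$ down with it, while observing $S$ pushes $P(M \mid S)$ toward 1 only when $P(S) \approx P(M)$, i.e., only if you already believe that nothing short of a maximal entity could meet standard $s$.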
I don’t see any reason why we should believe that any standard that we can empirically determine is maximal, so I don’t think that one can rationally believe some entity to be maximal in any such trait. At best, we can have evidence that they are far beyond human capability.
The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify that it is an AGI. If we have no information about a general intelligence’s origins or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI). We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (great power), and does do everything we want it to do (great benevolence), but it could easily have constraints on its knowledge and abilities that we as humans cannot test.
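As a toy illustration of why such black-box tests only ever establish lower bounds (hypothetical code with made-up names, not any real testing setup):

```python
from typing import Callable

def passes_knowledge_probe(
    oracle: Callable[[str], str],
    probes: list[tuple[str, str]],
) -> bool:
    """Return True if the oracle answers every probe correctly.

    A pass only shows the oracle's knowledge covers this finite,
    human-gradeable probe set -- a lower bound. No finite probe set
    can show it knows the answer to *every* question (omniscience),
    and probes we cannot grade ourselves tell us nothing.
    """
    return all(oracle(question) == answer for question, answer in probes)

# Even a perfect score bounds the oracle's knowledge from below;
# its ceiling stays empirically undetermined.
probes = [("What is 2 + 2?", "4"), ("Capital of France?", "Paris")]
answers = {"What is 2 + 2?": "4", "Capital of France?": "Paris"}
print(passes_knowledge_probe(lambda q: answers[q], probes))  # True
```

A perfect score is compatible both with omniscience and with merely knowing the probe set, which is why the ceiling stays undetermined.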
I will grant you this: just as sufficiently advanced technology would be indistinguishable from magic, a sufficiently advanced AGI would be indistinguishable from a god. However, “there exists some entity that is omnipotent, omniscient, and omnibenevolent” is not well-defined enough to be truth-apt; there are no empirical consequences of its being true rather than false.
Today or someday in the future.