Maximality of those traits? I don’t think that’s empirically determinable at all, and certainly not practically measurable by humans.
One can certainly have beliefs about comparative levels of power, knowledge, and benevolence. The types of evidence for and against them should be pretty obvious under most circumstances. Evidence against those traits being greater than some particular standard is also evidence against maximality of those traits. However, evidence for reaching some particular standard is only evidence for maximality if you already believe that the standard in question is the highest that can possibly exist.
I don’t see any reason why we should believe that any standard that we can empirically determine is maximal, so I don’t think that one can rationally believe some entity to be maximal in any such trait. At best, we can have evidence that they are far beyond human capability.
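To make that point concrete, here is a toy Bayesian sketch (my own illustration, with made-up "power levels", not anything from the discussion itself). Hypotheses are finite levels 1 through 10 plus a "maximal" hypothesis; the observation is that an entity passes every test we can devise up to some level. Passing eliminates the weaker hypotheses, but "maximal" only approaches certainty if your prior already rules out every level above what you managed to test.

```python
# Toy model (hypothetical levels chosen purely for illustration):
# hypotheses about an entity's "power level" -- finite levels 1..10 plus a
# "maximal" hypothesis -- updated on the observation that the entity passes
# every test we can devise up to level k.

def posterior_after_tests(prior: dict[str, float], k: int) -> dict[str, float]:
    """Bayesian update: passing all tests up to level k eliminates every
    hypothesis whose level is below k; 'maximal' passes any finite test."""
    def passes(h: str) -> bool:
        return h == "maximal" or int(h) >= k
    unnorm = {h: (p if passes(h) else 0.0) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Uniform prior over levels 1..10 and "maximal".
hypotheses = [str(n) for n in range(1, 11)] + ["maximal"]
prior = {h: 1 / len(hypotheses) for h in hypotheses}

post = posterior_after_tests(prior, k=7)
print(round(post["maximal"], 3))  # 0.2 -- raised from ~0.09, nowhere near certainty
# Only if the prior already rules out every level above 7 does passing the
# level-7 tests make "maximal" anywhere near certain.
```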
The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify that it is an AGI. If we have no information about a general intelligence's origins or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI). We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (great power), and does do everything we want it to do (great benevolence), but it could easily have constraints on its knowledge and abilities that we as humans cannot test.
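In the same spirit, a small sketch (the agents and questions are hypothetical, purely my own illustration) of why a finite test battery cannot separate "unconstrained" from "constrained just beyond what we can test":

```python
# Toy illustration: a finite test battery cannot distinguish an agent with
# unlimited knowledge from one whose knowledge happens to cover exactly the
# questions we thought to ask.

TEST_QUESTIONS = {"q1", "q2", "q3"}            # everything we can think to ask
UNTESTABLE_QUESTION = "q_beyond_human_reach"   # something we cannot even formulate

class UnboundedAgent:
    def answer(self, q: str) -> str | None:
        return f"correct({q})"                 # answers anything, tested or not

class BoundedAgent:
    def answer(self, q: str) -> str | None:
        return f"correct({q})" if q in TEST_QUESTIONS else None  # constrained outside the tests

def passes_battery(agent) -> bool:
    return all(agent.answer(q) is not None for q in TEST_QUESTIONS)

print(passes_battery(UnboundedAgent()), passes_battery(BoundedAgent()))  # True True
# Both agents pass every test we can run; only questions we cannot ask would separate them.
```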
I will grant you this: just as any sufficiently advanced technology would be indistinguishable from magic, a sufficiently advanced AGI would be indistinguishable from a god. However, “There exists some entity that is omnipotent, omniscient, and omnibenevolent” is not well-defined enough to be truth-apt; there are no empirical consequences that differ between it being true and it being false.
What empirical evidence would someone need to observe to believe that such an AGI, one that is maximal in any of those traits, exists?
Today or someday in the future.