If AGI just means “can, in principle, solve any problem,” then I think we could already build a very, very slow AGI right now (at least for problems with well-defined solutions: you just perform an exhaustive search over candidate solutions).
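Concretely, here is a minimal sketch of the enumerate-and-check search I have in mind (Python; the verifier `is_solution` and the string encoding of candidates are stand-ins for whatever makes the problem well-defined):

```python
from itertools import count, product

def brute_force_solve(is_solution, alphabet="01"):
    """Enumerate every finite string over `alphabet`, shortest first,
    and return the first candidate the verifier accepts."""
    for length in count(1):                      # 1, 2, 3, ... forever
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if is_solution(candidate):           # decidable check assumed
                return candidate

# Toy example: the "problem" is to find the binary string "1011".
print(brute_force_solve(lambda s: s == "1011"))
```

If a solution exists and the check always terminates, this eventually finds it; the slowness is the point.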
Plus, I don’t think my definition matches the one given by Bostrom:
By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
ETA: I edited the original post to be more specific.
Your prediction reads the same as this definition AFAICT, if you substitute “nearly every” for “practically every”, etc.
I think this is an instance of The Illusory Transparency of Words. What you wrote in the prediction probably doesn’t have the interpretation you meant.
We don’t have AGI now because there is a lot hiding behind “at least for problems with well-defined solutions.” Therein lies the magic.