So, superintelligence. I would suggest editing your prediction to say so; they’re not synonymous terms. In fact, the general expectation is that an AGI under many architectures would be less efficient than humans without extensive training. AGI is a statement of capability: it can, in principle, solve any problem, not that it does so better than humans.
If AGI just means “can, in principle, solve any problem”, then I think we could already build a very, very slow AGI right now (at least for all well-defined solutions: you just perform a search over candidate solutions).
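As a rough illustration (the names and the toy problem here are invented for the sketch, not taken from anywhere), “perform a search over candidate solutions” could look like this for any problem whose candidate solutions can be mechanically checked:

```python
# A minimal sketch of the brute-force idea above: for any problem whose
# solutions are well defined (i.e. we have a checker that can verify a
# candidate), enumerate candidates until one passes the check.

from itertools import count, product
import string


def brute_force_solve(is_solution, alphabet=string.printable):
    """Return the first string over `alphabet` (enumerated by length,
    then in alphabet order) that the checker accepts. Never terminates
    if no solution exists, and is astronomically slow on anything
    nontrivial -- which is exactly the point."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if is_solution(candidate):
                return candidate


if __name__ == "__main__":
    # Toy "well-defined problem": a 3-character string whose character
    # codes sum to 300. The checker alone specifies what counts as a solution.
    print(repr(brute_force_solve(lambda s: len(s) == 3 and sum(map(ord, s)) == 300)))
```

Anything with a checkable solution falls out of this enumeration eventually; the catch is how much work “eventually” is doing.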
Plus, I don’t think my definition matches the definition given by Bostrom.
By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
ETA: I edited the original post to be more specific.
Your prediction reads the same as this definition AFAICT, if you substitute “nearly every” for “practically every”, etc.
I think this is an instance of The Illusory Transparency of Words. What you wrote in the prediction probably doesn’t have the interpretation you meant.
We don’t have AGI now because there is a lot hiding behind “at least for all well-defined solutions.” Therein lies the magic.
Well, even people working on AGI don’t think that is a possibility. I think the word you are looking for is “superintelligence” not AGI.
I’m using a slightly modified version of the definition given by Grace et al. for high-level machine intelligence.