“I’ve seen people argue that AGI will never exist, and even if we can get an AI to do everything a human can do, that won’t be “true” general intelligence. I’ve seen people say that Gato is a general intelligence, and we are living in a post-AGI world as I type this. Both of these people may make the exact same practical predictions on what the next few years will look like, but will give totally different answers when asked about AGI timelines!”
This is an amazingly good point. It’s also made me realise that I don’t have a solid definition of what “AGI” means to me either. More importantly, coming up with a definition would not solve the general case—even if I had a precise definition of what I meant, I’d have to restate it every time I wanted to talk about AGI.
Excellent post, and I would definitely like to see more knowledgeable people than I make predictions based on these definitions, such as “I wouldn’t worry about an AI that passed <Definition X> but would be very worried about one that passed <Definition Y>” or “I think we’re 50% likely to get <Definition Z> by <Year>”.
I concur with your last paragraph, and see it as a special case of rationalist taboo (taboo “AGI”). I’d personally like to see a set of AGI timeline questions on Metaculus that differ only in their definitions. I think it would be useful for the same forecasters to see how their timeline predictions vary by definition; I suspect there would be a lot of personal updating to resolve emergent inconsistencies (extrapolating from my own experience, and also from ACX prediction market posts IIRC), and it would be interesting to see how those personal updates behave in the aggregate.