And maybe I am misremembering history or confused about what you are referring to, but in my mind, the promise of the “AGI community” has always been (implicitly or explicitly) that if you call something “human-level AGI”, it should be able to get you to (a), or at least have a bigger economic and societal impact than currently-deployed AI systems have actually had so far.
Yeah, I don’t disagree with this. There’s a question here about which stories about AGI should be thought of as definitional versus as consequences extrapolated from the definition together with a broader set of assumptions. The situation we’re in right now, as I see it, is one where some of those broader assumptions turn out to be false, so definitions that seemed relatively clear become more ambiguous.
I’m privileging notions about capabilities over notions about societal consequences, partly because I see “AGI” as more of a technology-oriented term and less of a social-consequences-oriented term. So while I would agree that talk about AGI within the AGI community historically often went along with utopian visions, I pretty strongly think of those visions as speculation about impact rather than as part of the definition.