Maybe a better question than “time to AGI” is “time to mundanely transformative AGI.” I think a lot of people have a model of the near future in which a lot of current knowledge work (and other work) is fully or almost-fully automated, but as of right now, that hasn’t actually happened (despite all the hype).
For example, one of the things current A(G)Is are supposedly strongest at is writing code, but I would still rather hire a (good) junior software developer than rely on currently available AI products for just about any real programming task, and it’s not a particularly close call. I do think there’s a pretty high likelihood that this will change imminently as products like Devin improve and get more widely deployed, but it seems worth noting (and finding a term for) the fact that this kind of automation mostly hasn’t happened yet, aside from certain customer support and copyediting jobs.
I think when someone asks “what is your time to AGI”, they’re usually asking about when you expect either (a) AI to radically transform the economy and potentially usher in a golden age of prosperity and post-scarcity or (b) the world to end.
And maybe I am misremembering history or confused about what you are referring to, but in my mind, the promise of the “AGI community” has always been (implicitly or explicitly) that if you call something “human-level AGI”, it should be able to get you to (a), or at least have a bigger economic and societal impact than currently-deployed AI systems have actually had so far. (Rightly or wrongly, the ballooning stock prices of AI and semiconductor companies seem to be mostly an expectation of earnings and impact from in-development and future products, rather than expected future revenues from wider rollout of any existing products in their current form.)
Yeah, I don’t disagree with this. There’s a question here about which stories about AGI should be thought of as definitional, versus which are extrapolations from that definition combined with a broader set of assumptions. The situation we’re in right now, as I see it, is one where some of those broader assumptions turn out to be false, so definitions which seemed relatively clear become more ambiguous.
I’m privileging notions about capabilities over notions about societal consequences, partly because I see “AGI” as more of a technology-oriented term and less of a social-consequences-oriented term. So while I would agree that talk about AGI within the AGI community has historically often gone along with utopian visions, I pretty strongly think of those visions as speculation about impact rather than as part of the definition.