I think this mostly just reveals that “AGI” and “human-level” are bad terms.
Under your proposed usage, modern transformers are (IMO) brutally non-central with respect to the terms “AGI” and “human-level” from the perspective of most people.
Unfortunately, I don’t think there is any definition of “AGI” and “human-level” which:
Corresponds to the words used.
Is also central from the perspective of most people hearing the words.
I prefer the term “transformative AI”, ideally paired with a definition.
(E.g., in The case for ensuring that powerful AIs are controlled, we use the terms “transformatively useful AI” and “early transformatively useful AI”, both of which we define. We were initially planning on some term like “human-level”, but we ran into a bunch of issues with using this term due to wanting a more precise concept, and thus instead used a concept like not-wildly-qualitatively-superhuman-in-dangerous-domains or non-wildly-qualitatively-superhuman-in-general-relevant-capabilities.)
I should probably taboo “human-level” more than I currently do; this term is problematic.
I also like “transformative AI.”
I don’t think of it as “AGI” or “human-level” being an especially bad term—most category nouns are bad terms (like “heap”), in the sense that they’re inherently fuzzy gestures at the structure of the world. It’s just that in the context of 2024, we’re now inside the fuzz.
A mile away from your house, “towards your house” is a useful direction. Inside your front hallway, “towards your house” is a uselessly fuzzy direction—and a bad term. More precision is needed because you’re closer.
This is an excellent short mental handle for this concept. I’ll definitely be using it.
Yeah, I think nixing the terms ‘AGI’ and ‘human-level’ is a very reasonable response to my argument. I don’t claim that “we are at human-level AGI now, everyone!” has important policy implications (I am not sure one way or the other, but it is certainly not my point).
‘Superintelligence’ seems more fitting than AGI for the ‘transformative’ scope. The problem with “transformative AI” as a term is that transformation will occur at staggered rates across subdomains. For example, text-based generation reached thresholds several years ago that video generation has only just recently hit.
I don’t love ‘superintelligence’ as a term, and even less as a goalpost (I’d much rather be in a world aiming for AI ‘superwisdom’), but of the commonly used terms it seems the best fit for what people are trying to describe: an AI generalized and sophisticated enough to be “at or above maximal human competency in most things.”
The OP seems correct, at least to me, that ‘AGI’ as a term owes its foundations to being a differentiator from narrowly scoped competencies in AI, and that the lines for generalization are now sufficiently blurred with transformers that we should stop moving the goalposts for the ‘G’ in AGI. And at least from what I’ve seen, there’s active harm in the industry where framing ‘AGI’ as some far-future development leads people less up to date with research on things like world models or prompting to conclude that GPTs are “just Markov predictions” (overlooking the importance of the self-attention mechanism and the surprising degree of generalization that follows from its presence).
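To make that contrast concrete, here’s a rough, purely illustrative sketch (made-up sizes, random stand-in weights, hypothetical function names) of the qualitative difference: a fixed-window Markov predictor can only condition on the previous token via a lookup table, whereas a single self-attention layer mixes the entire context with content-dependent weights.

```python
# Illustrative sketch only: random matrices stand in for learned weights.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 8, 16

# Markov-style (bigram) predictor: the next-token distribution is a fixed
# lookup keyed on the single previous token; the rest of the context is ignored.
bigram_table = rng.random((vocab_size, vocab_size))
bigram_table /= bigram_table.sum(axis=1, keepdims=True)

def markov_next_token_probs(context):
    return bigram_table[context[-1]]  # only the last token matters

# Single-head self-attention: every position attends to every other position,
# with weights computed from the content of the tokens themselves.
embedding = rng.normal(size=(vocab_size, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_representation(context):
    X = embedding[np.array(context)]          # (seq_len, d_model)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_model)       # all-pairs, content-dependent
    return softmax(scores) @ V                # whole-context mixing

context = [3, 1, 4, 1, 5]
print(markov_next_token_probs(context).shape)   # (8,)  -- depends only on token 5
print(attention_representation(context).shape)  # (5, 16) -- depends on all tokens
```

A real transformer stacks many such layers with learned weights, but even this toy version shows why “Markov prediction over the previous token” is the wrong mental model for how the context gets used.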
I would wager the vast majority of consumers of these models underestimate the generalization present: in addition to their naive usage of outdated free models, they’ve been reading article after article about how it’s “not AGI” and “just fancy autocomplete” (reflecting a separate phenomenon where professional writers seem more inclined to write negative articles than positive ones about a technology perceived as a threat to writing jobs).
As this topic becomes more important, it might be useful for democracies to have a more accurately informed broader public, and AGI as a moving goalpost seems counterproductive to those aims.
To me, superintelligence implies qualitatively much smarter than the best humans. I don’t think this is needed for AI to be transformative. Fast and cheap-to-run AIs which are as qualitatively smart as humans would likely be transformative.
Agreed; I thought you wanted that term as a replacement for how the OP says ‘AGI’ is being used in relation to x-risk.
In terms of “fast and cheap and comparable to the average human”—well, then for a number of roles and niches we’re already there.
Sticking with the intent behind your term, maybe “generally transformative AI” is a more accurate representation for a colloquial ‘AGI’ replacement?
Oh, by “as qualitatively smart as humans” I meant “as qualitatively smart as the best human experts”.
I also maybe disagree with:
for a number of roles and niches we’re already there
Or at least the % of economic activity covered by this still seems low to me.
Oh, by “as qualitatively smart as humans” I meant “as qualitatively smart as the best human experts”.
I think that is more comparable to saying “as smart as humanity.” No individual human is as smart as humanity in general.