Boundaries are fuzzy and, on the whole, unimportant. Even if we had some yardstick that we were satisfied would tell us when an AI reached exactly human level, we don’t expect that point on the yardstick to be a discontinuity in things we care about.
What we care more about is something like “impactfulness,” which is a function of the AI’s various capabilities, one that might weight skill at computer programming more heavily than skill at controlling a human body. We think there’s plausibly some discontinuity (or at least a really steep region) in impactfulness as a function of capabilities, but we don’t know where it’s going to be.
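To make that “steep region” idea concrete, here’s a minimal sketch (the capability weights, the logistic form, and every number in it are illustrative assumptions, not a real model of impact): impactfulness is a weighted sum of capability scores pushed through a steep logistic, so a small gain in a heavily weighted capability like programming moves impact far more than an equal gain in a lightly weighted one.

```python
import math

# Hypothetical capability weights -- illustrative assumptions only,
# not a claim about which capabilities actually matter most.
WEIGHTS = {"programming": 5.0, "motor_control": 0.5, "language": 2.0}

def impactfulness(capabilities: dict[str, float],
                  steepness: float = 20.0,
                  threshold: float = 3.75) -> float:
    """Weighted capability score pushed through a steep logistic.

    Near `threshold` the curve is very steep, so a small gain in a
    heavily weighted capability (e.g. programming) moves impact a lot.
    """
    score = sum(WEIGHTS[name] * level for name, level in capabilities.items())
    return 1.0 / (1.0 + math.exp(-steepness * (score - threshold)))

# An equal-sized bump to a heavily weighted capability dominates
# the same bump to a lightly weighted one.
base = {"programming": 0.5, "motor_control": 0.5, "language": 0.5}
better_coder = {**base, "programming": 0.6}
better_body = {**base, "motor_control": 0.6}
print(impactfulness(base))          # ~0.5  (sits on the steep region)
print(impactfulness(better_coder))  # ~1.0  (big jump)
print(impactfulness(better_body))   # ~0.73 (much smaller move)
```

The design choice doing the work here is the steepness parameter: as it grows, the logistic approaches a step function, i.e. a genuine discontinuity, which is one way to read the “really steep region” framing above.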
Still, if you just want to think about ways people try to operationalize the notion of AGI, one starting point might be the resolution criteria for Metaculus questions like https://www.metaculus.com/questions/5121/date-of-general-ai/