I’ve seen the terms AGI and ASI floated around quite a bit, usually with the assumption that the reader knows what they mean. From what I’ve seen, it’s generally presumed that an AGI is an artificial intelligence that is qualitatively human, in that it can do any task and generalize to new, unseen tasks at least as well as a human can. An ASI, by contrast, is an intelligence that can surpass any human at any task; it’s by definition superhuman. From my point of view these definitions are rather ambiguous. There will be certain tasks that are easier for the machine. For example, an AGI may be great at learning new languages, performing arithmetic, and coding, but if you try to make it understand what it feels like to have limbs or to run a marathon, it might come up short, depending on how it’s designed. Is an AGI still an AGI if some tasks are outside the scope of what it’s capable of experiencing? I think this is a pretty important question: at what point do we consider something to be as qualitatively intelligent as a human? Would it need to be able to experience and do everything a human can?
Boundaries are fuzzy and, on the whole, unimportant. Even if we had a yardstick we were satisfied would tell us when an AI reached exactly human level, we wouldn’t expect that point on the yardstick to be a discontinuity in the things we care about.
What we care more about is something like “impactfulness,” which is a function of the AI’s various capabilities, one that might weight skill at computer programming more heavily than skill at controlling a human body. We think there’s plausibly some discontinuity (or at least a really steep region) in impactfulness as a function of capabilities, but we don’t know where it’s going to be.
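To make that picture slightly more concrete, here is one toy way to write it down. The particular functional form, the weights, and the symbols are purely illustrative assumptions, not anything claimed above:

$$
I(c) \;=\; f\!\left(\sum_i w_i \, c_i\right)
$$

where $c_i$ is the AI’s level on capability $i$ (say, programming or controlling a body), $w_i$ is how heavily that capability counts toward impact (with something like $w_{\text{programming}} \gg w_{\text{body}}$), and $f$ is some aggregating function with a very steep, possibly discontinuous, region whose location we don’t know in advance.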
Still, if you just want to think about ways people try to operationalize the notion of AGI, one starting point might be the resolution criteria for Metaculus questions like https://www.metaculus.com/questions/5121/date-of-general-ai/