[Question] Clarification on Definition of AGI
I’ve seen the terms AGI and ASI floated around quite a bit, usually with the assumption that the reader already knows what they mean. From what I’ve seen, an AGI is generally presumed to be an artificial intelligence that is qualitatively human-level: it can perform any task, and generalize to new, unseen tasks, at least as well as a human can. An ASI, by contrast, is an intelligence that can surpass any human at any task; it is by definition superhuman. To me, these definitions are rather ambiguous. Certain tasks will inevitably be easier for a machine. For example, an AGI may be great at learning new languages, performing arithmetic, and coding, but if you ask it to understand what it feels like to have limbs or to run a marathon, it might, depending on how it’s designed, come up short. Is an AGI still an AGI if some tasks lie outside the scope of what it is capable of experiencing? I think this is an important question, because at what point do we consider something as qualitatively intelligent as a human? Would it need to be able to experience and do everything a human can?