I think this just isn’t a very helpful definition of AGI, and one that will likely lead people to misinterpret your statements, because it’s so sensitive to the final tasks to be automated (which might be totally uninteresting). Under this definition, time to AGI, and time from AGI to superintelligence, might vary dramatically depending on what you count as an intellectual task.
Hmmm. The phrase is “relevant intellectual tasks.” You’re saying people will prematurely declare that AGI has been achieved, months or even years before I would declare it, because they’ll classify as non-relevant some task that I classify as relevant? (And which AIs still cannot do?) I’m skeptical that this will be a problem in practice.
ETA: Also, I’d be interested to hear which alternative definitions you like better! I’m not particularly wedded to this one; I just think it’s better than various other definitions of AGI, and waaaay better than TAI or GDP-based definitions.
Relevant to what?
Centrally, I’m thinking about big important things, like taking over the world, or making R&D go FOOM, resulting in some other AI system that can take over the world. But I’m happy to have a broader conception of relevance on a case-by-case basis. Insofar as people have a broader conception of relevance than I do, AGI-by-their-definition might come hours or even several months later than AGI-by-my-definition. (The latter would happen in cases where R&D ability is significantly harder than world-takeover ability. I guess in principle I could see this even resulting in a several-year gap, though I think that’s pretty unlikely.)