I feel confused about your “pre-AGI”/“post-AGI” distinction. I expect that there will be a period of months or even years during which it’s up for debate whether we’ve built “AGI.” Given this, it feels very odd to say that takeoff might happen weeks after reaching AGI, because the takeoff period would then be much shorter than the uncertainty period.
By AGI I mean AI systems which can do every relevant intellectual task that human professionals can do, only cheaper and faster. Because of variation across tasks—progress won’t reach human level on every task simultaneously—by the time we get AGI we’ll already have AI systems which are strongly superhuman at many relevant intellectual tasks.
I feel fairly confident that by the time AGI exists, we’ll be at most months away from superintelligence, and possibly just hours, absent defeaters such as the relevant powers coordinating to slow down the R&D. The main alternative is if the only available ways to significantly improve intelligence along the bottleneck dimensions are larger, longer training runs. Even then, months seems like a plausible timeframe, though admittedly it could take maybe two or three years.
I’m not sure I agree with your expectation. I do think there’ll be lots of FUD and uncertainty around AGI (there already is), but that’s consistent with the above claims.
I think this just isn’t a very helpful definition of AGI, and one which will likely lead people to misinterpret your statements, because it’s so sensitive to the final tasks automated (which might be totally uninteresting). Under this definition, both the time to AGI and the time from AGI to superintelligence might vary dramatically depending on what you count as an intellectual task.
Hmmm. The phrase is “relevant intellectual tasks.” You are saying people will prematurely declare that AGI has been achieved, months or even years before I would declare it, because they’ll classify as not relevant some task which I classify as relevant? (And which AIs still cannot do?) I am skeptical that this will be a problem in practice.
ETA: Also, I’d be interested to hear which alternative definitions you like better! I’m not particularly wedded to this one; I just think it’s better than various other definitions of AGI and waaaay better than TAI or GDP-based definitions.
Relevant to what?
Centrally, I’m thinking about big important things, like taking over the world, or making R&D go FOOM and thereby producing some other AI system which can take over the world. But I’m happy to have a broader conception of relevance on a case-by-case basis. Insofar as people have a broader conception of relevance than me, AGI-by-their-definition might come hours or even several months later than AGI-by-my-definition. (The latter would happen in cases where R&D ability is significantly harder to achieve than world-takeover ability. I guess in principle I could see this even resulting in a several-year gap, though I think that’s pretty unlikely.)