Thanks for sharing this! A couple of (maybe naive) things I’m curious about.
Suppose I read ‘AGI’ as ‘Metaculus-AGI’, and we condition on AGI by 2025 — what sort of capabilities do you expect by 2027? I ask because I’m reminded of a very nice (though high-level) list of par-human capabilities for ‘GPT-N’ from an old comment:
My immediate impression says something like: “it seems plausible that we get Metaculus-AGI by 2025, without the AI being par-human at 2, 3, or 6.”[1] This also makes me (instinctively, I’ve thought about this much less than you) more sympathetic to AGI → ASI timelines being >2 years, as the sort-of-hazy picture I have for ‘ASI’ involves (minimally) some unified system that bests humans on all of 1-6. But maybe you think that I’m overestimating the difficulty of reaching these capabilities given AGI, or maybe you have some stronger notion of ‘AGI’ in mind.
The second thing: roughly how independent are the first four statements you offer? I guess I’m wondering if the ‘AGI timelines’ predictions and the ‘AGI → ASI timelines’ predictions “stem from the same model”, as it were. Like, if you condition on ‘No AGI by 2030’, does this have much effect on your predictions about ASI? Or do you take them to be supported by ~independent lines of evidence?
Basically, I think an AI could pass a two-hour adversarial Turing test without having the coherence of a human over much longer time horizons (points 2 and 3). Probably less importantly, I also think it could meet the Metaculus definition without being able to search as efficiently over known facts as humans can (especially given that AIs will have a much larger set of ‘known facts’ than humans).
Reply to first thing: When I say AGI I mean something which is basically a drop-in substitute for a human remote worker circa 2023, and not just a mediocre one but a good one, e.g. an OpenAI research engineer. This is what matters, because this is the milestone most strongly predictive of massive acceleration in AI R&D.
Arguably Metaculus-AGI implies AGI by my definition (actually it’s Ajeya Cotra’s definition) because of the Turing test clause: two hours + adversarial means that anything a human can do remotely in two hours, the AI can do too; otherwise the judges would use that as the test. (Granted, this leaves wiggle room for an AI that is as good as a standard human at everything but not as good as OpenAI research engineers at AI research.)
Anyhow, yeah: if we get Metaculus-AGI by 2025 then I expect ASI by 2027. ASI = superhuman at every task/skill that matters. So, imagine a mind that combines the best abilities of von Neumann, Einstein, Tao, etc. for physics and math, but then also has the best abilities of [insert most charismatic leader] and [insert most cunning general] and [insert most brilliant coder] … and so on for everything. Then imagine that, in addition to the above, this mind runs at 100x human speed. And it can be copied, and the copies are GREAT at working well together; they form a superorganism/corporation/bureaucracy that is more competent than SpaceX / [insert your favorite competent org].
Re independence: Another good question! Let me think...
--I think my credence in 2, conditional on no AGI by 2030, would go down somewhat, but not enough that I wouldn’t still endorse it. A lot depends on the reason we don’t get AGI by 2030. If it’s because AGI turns out to inherently require a ton more compute and training, then I’d be hopeful that ASI would take more than two years after AGI.
--3 is independent.
--4 maybe would go down slightly, but only slightly.