Reply to first thing: When I say AGI, I mean something that is basically a drop-in substitute for a human remote worker circa 2023, and not a mediocre one but a good one, e.g. an OpenAI research engineer. This is what matters, because this is the milestone most strongly predictive of massive acceleration in AI R&D.
Arguably metaculus-AGI implies AGI by my definition (actually it's Ajeya Cotra's definition) because of the Turing test clause: a 2-hour, adversarial test means that anything a human can do remotely in 2 hours, the AI can do too, since otherwise the judges would use that task as the test. (Granted, this leaves wiggle room for an AI that is as good as a standard human at everything but not as good as OpenAI research engineers at AI research.)
Anyhow, yeah: if we get metaculus-AGI by 2025 then I expect ASI by 2027. ASI = superhuman at every task/skill that matters. So, imagine a mind that combines the best abilities of Von Neumann, Einstein, Tao, etc. for physics and math, but that also has the best abilities of [insert most charismatic leader] and [insert most cunning general] and [insert most brilliant coder] … and so on for everything. Then imagine that, in addition to the above, this mind runs at 100x human speed. And it can be copied, and the copies are GREAT at working well together; they form a superorganism/corporation/bureaucracy that is more competent than SpaceX / [insert your favorite competent org].
Re independence: Another good question! Let me think...
--I think my credence in 2, conditional on no AGI by 2030, would go down somewhat, but not enough that I wouldn't still endorse it. A lot depends on the reason we don't get AGI by 2030: if it's because AGI turns out to inherently require a ton more compute and training, then the same bottleneck should apply to the AGI-to-ASI jump, so I'd be hopeful that ASI would take more than two years after AGI. (See the sketch after this list.)
--3 is independent; conditioning on no AGI by 2030 wouldn't change my credence in it.
--My credence in 4 would maybe go down slightly, but only slightly.
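To make the conditioning in those bullets explicit, here is a minimal sketch in probability notation. The labels are mine, not from the Metaculus question: A stands for "AGI by 2030" and S_k for "claim k holds"; the inequalities just restate the qualitative judgments above, not any computed numbers.

\begin{align*}
&\text{Let } A = \text{``AGI by 2030''}, \qquad S_k = \text{``claim } k \text{ holds''}. \\
&P(S_2 \mid \neg A) < P(S_2) \quad \text{(drops somewhat, but stays above my endorsement bar)} \\
&P(S_3 \mid \neg A) = P(S_3) \quad \text{(independent: conditioning changes nothing)} \\
&P(S_4 \mid \neg A) \lessapprox P(S_4) \quad \text{(at most a slight drop)}
\end{align*}

The point of writing it this way is just that "independence" here means the no-AGI-by-2030 observation carries no evidence about claim 3, whereas it carries a little evidence against claims 2 and 4.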