Cool. Then I think we are in agreement; I agree with everything you’ve just said. (Unfortunately I think that when it matters most, around the time of AGI, we’ll be going at close-to-maximum speed, i.e. we’ll maybe be delaying the creation of superintelligence by like 0–6 months relative to if we were pure accelerationists.)
How fast do you think that the AI companies could race from AGI to superintelligence assuming no regulation or constraints on their behavior?
Depends on the exact definitions of both. Let’s say AGI = ‘a drop-in substitute for an OpenAI research engineer’ and ASI = ‘qualitatively at least as good as the best humans at every cognitive task; qualitatively superior on many important cognitive tasks; also, at least 10x faster than humans; also, able to run at least 10,000 copies in parallel in a highly efficient organizational structure (at least as efficient as the most effective human organizations like SpaceX)’.
In that case I’d say probably about eight months? Idk. Could be more like eight weeks.