I admit I was interpreting him in the first sense: that even if we got an aligned AGI, we would need to stop others from building unaligned AGIs. But I also see your interpretation as plausible, and under that model I agree that we'd ideally not have a maximum-speed race, and would go somewhat slower as we get closer to AGI and ASI.
I think a maximum sprint to get more capabilities is also quite bad, though conditional on that happening, I don't think we'd automatically be doomed; there's a non-trivial, but far too low, chance that everything works out.
Cool. Then I think we are in agreement; I agree with everything you've just said. (Unfortunately I think that when it matters most, around the time of AGI, we'll be going at close-to-maximum speed, i.e. we'll maybe be delaying the creation of superintelligence by something like 0–6 months relative to if we were pure accelerationists.)
How fast do you think that the AI companies could race from AGI to superintelligence assuming no regulation or constraints on their behavior?
Depends on the exact definitions of both. Let's say AGI = 'a drop-in substitute for an OpenAI research engineer' and ASI = 'qualitatively at least as good as the best humans at every cognitive task; qualitatively superior on many important cognitive tasks; also, at least 10x faster than humans; also, able to run at least 10,000 copies in parallel in a highly efficient organizational structure (at least as efficient as the most effective human organizations, like SpaceX)'.
In that case I'd say probably about eight months? Idk. Could be more like eight weeks.