There are actually two different parts to the answer, and the difference is important. There is the time between now and the first AI capable of autonomously improving itself (time to AGI), and there’s the time it takes for that AI to “foom”, meaning improve itself from a roughly human level toward godhood. In EY’s view, it doesn’t matter at all how long we have between now and AGI, because foom will happen so quickly and will be so decisive that no one will be able to respond and stop it. (Maybe, if we had 200 years, we could solve it, but we don’t.) In other people’s view (including Robin Hanson and Paul Christiano, I think) there will be “slow takeoff.” On this view, AI will gradually improve itself over years, probably working alongside human researchers during that time but progressively gaining more autonomy and skills. Hanson and Christiano agree with EY that doom is likely. In fact, in the slow takeoff view ASI might arrive even sooner than in the fast takeoff view.
Isn’t it conceivable that improving intelligence becomes harder faster than the AI’s capabilities scale? E.g. couldn’t it be that somewhere around human-level intelligence, each marginal percent of improvement becomes twice as difficult as the previous percent? I admit that doesn’t sound very likely, but if that were the case, then even a self-improving AI would potentially improve itself very slowly, and maybe even sub-linearly rather than exponentially, wouldn’t it?
I’m not sure about Hanson, but Christiano is a lot more optimistic than EY.