4 respondents see a hard takeoff as likely (at varying degrees of hardness), and 1 finds it unlikely
Do people who say that hard takeoff is unlikely mean that they expect rapid recursive self-improvement to happen only after the AI is already very powerful? Presumably most people agree that a sufficiently smart AI will be able to cause an intelligence explosion?
Interesting question! I’m afraid I didn’t probe the cruxes of those who don’t expect hard takeoff. But my guess is that you’re right—no hard takeoff ~= the most transformative effects happen before recursive self-improvement