IMO, soft/smooth/gradual still convey the wrong impression. They still sound like “slow takeoff”: as though progress would be steady enough that normal people would have time to orient to what’s happening, keep track, and exert control.
That is exactly the meaning that I’d thought was standard for “soft takeoff” (and which I assumed was synonymous with “slow takeoff”), e.g. as I wrote in 2012:
Bugaj and Goertzel (2007) consider three kinds of AGI scenarios: capped intelligence, soft takeoff, and hard takeoff. In a capped intelligence scenario, all AGIs are prevented from exceeding a predetermined level of intelligence and remain at a level roughly comparable with humans. In a soft takeoff scenario, AGIs become far more powerful than humans, but on a timescale which permits ongoing human interaction during the ascent. Time is not of the essence, and learning proceeds at a relatively human-like pace. In a hard takeoff scenario, an AGI will undergo an extraordinarily fast increase in power, taking effective control of the world within a few years or less. [Footnote: Bugaj and Goertzel defined hard takeoff to refer to a period of months or less. We have chosen a somewhat longer time period, as even a few years might easily turn out to be too little time for society to properly react.] In this scenario, there is little time for error correction or a gradual tuning of the AGI’s goals.
(B&G didn’t actually invent soft/hard takeoff, but it was the most formal-looking cite we could find.)