My $0.02: singularities brought about by recursive self-improvement are one concept, and singularities involving really-really-fast improvement are a different concept. (They are, of course, perfectly compatible.)
It may just not be all that useful to have a single word that denotes both.
If I want to talk about a “hard take-off” or a “step-function” scenario caused by recursively self-improving intelligence, I can say that.
But I estimate that 90% of what I will want to say about it will be true of many different step-function scenarios (e.g., those caused by the discovery of a cache of Ancient technology) or true of many different recursively self-improving intelligence scenarios.
So having to actually stop and think about whether I want to include both clauses may be worthwhile.
Completely agree with paras 1 and 2.
However, it does seem that we talk about “hard take-off scenarios caused by recursively self-improving intelligence” often enough to warrant a convenience term meaning just that. Much of the discussion about cascades, cycles, insights, AI-boxes, resource overhangs, etc. is specific to the recursive self-improvement scenario, and not to, e.g., the cache-of-Ancient-tech scenario.