Jeffrey Ladish asked on Twitter:

Do you think the singularity (technological singularity) is a useful term? I’ve been seeing it used less among people talking about the future of humanity and I don’t understand why. Many people still think an intelligence explosion is likely, even if it’s “slow”

I replied:
- ‘rapid capability gain’ = progress from pretty-low-impact AI to astronomically high-impact AI is fast in absolute terms
- ‘hard takeoff’ = rapid capability gain that’s discontinuous
- ‘intelligence explosion’ = hard takeoff via recursive self-improvement
Eliezer says:
“‘Rapid capability gain’ is, I’d say, going from ‘capable enough to do moderately neat non-pivotal world-affecting things’ to ‘capable enough to destroy world’ quickly in absolute terms.
“I don’t think it’s about ‘subhuman’ because, like, is AlphaZero subhuman? Things go superhuman in bits and pieces until, in some sense, all hell breaks loose.”
FOOM is a synonym for intelligence explosion, based on the analogy where an AGI recursively self-improving to superintelligence is like a nuclear pile going critical.
Sometimes people also talk about already-pretty-smart-and-impactful AI “going FOOM”, which I take to mean that they’re shooting off to even higher capability levels.
Jeffrey replied:
Okay, I appreciate these distinctions. I think the difficulty of replacing “singularity” with “intelligence explosion” is that the latter sounds like a process rather than an outcome. I want to refer to the outcome.
To which Nate Soares responded:
In my vocab, “singularity” refers to something more like an event (& the term comes from Vinge noting that the dawn of superintelligence obscures our predictive vision). I still use “singularity” for the event, and “post-singularity” for the time regime.
I suspect there’s a school of thought for which “singularity” was massively overoptimistic—is this what you mean by Kurzweilian magical thinking? That it’s a transition in a very short period of time from scarcity-based capitalism to post-scarcity utopia. Rather than a simple destruction of most of humanity, and of the freedom and value of those remaining.
“That it’s a transition in a very short period of time from scarcity-based capitalism to post-scarcity utopia.”
No, that part of Kurzweil’s view is 100% fine. In fact, I believe I expect a sharper transition than Kurzweil expects. My objection to Kurzweil’s thinking isn’t ‘realistic mature futurists are supposed to be pessimistic across the board’, it’s specific unsupported flaws in his arguments:
- Rejection of Eliezer’s five theses (which were written in response to Kurzweil): intelligence explosion, orthogonality, convergent instrumental goals, complexity of value, fragility of value.
- Mystical, quasi-Hegelian thinking about surface trends like ‘economic growth’. See the ‘Actual Ray Kurzweil’ quote in https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works.
- Otherwise weird and un-Bayesian-sounding attitudes toward forecasting. Seems to think he has a crystal ball that lets him exactly time tech developments, even where he has no model of a causal path by which he could be entangled with evidence about that future development...?