
Singularity


The Singularity or Technological Singularity is a term with a number of different meanings, ranging from a period of rapid change to the creation of greater-than-human intelligence.

See also: Intelligence explosion, Event horizon thesis, Hard takeoff, Soft takeoff

Three Singularity schools

Eliezer Yudkowsky has observed that the varying perspectives on the Singularity can be broadly split into three “major schools”—Accelerating Change (Ray Kurzweil), the Event Horizon (Vernor Vinge), and the Intelligence Explosion (I.J. Good).

The Accelerating Change school observes that, contrary to our intuitive linear expectations about the future, the rate of change of information technology grows exponentially. The last 200 years have seen more technological revolutions than the 20,000 years before them. Clear examples of this exponential trend include, but are not limited to, Moore's law, Internet speed, gene sequencing, and the spatial resolution of brain scanning. By projecting these growth curves forward, it becomes possible to estimate what will be feasible to engineer at a given date. Ray Kurzweil specifically dates the Singularity to 2045.
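
As a rough illustration of the kind of extrapolation this school relies on, the sketch below projects a capability metric forward under a fixed doubling time. The starting value and the two-year doubling period are illustrative assumptions for this sketch, not Kurzweil's actual parameters.

```python
# Illustrative only: project a capability metric forward under a fixed doubling time.
# The 1.0 starting value and the two-year doubling period are assumptions of this sketch.

def project_capability(initial: float, doubling_time_years: float, years_ahead: float) -> float:
    """Return the projected value after years_ahead years of steady exponential growth."""
    return initial * 2 ** (years_ahead / doubling_time_years)

if __name__ == "__main__":
    # Normalize today's capability (e.g. compute per dollar) to 1.0 and project forward.
    for horizon in (10, 20, 30):
        print(f"{horizon} years out: {project_capability(1.0, 2.0, horizon):.0f}x today")
```

The point of the exercise is only that compounding growth quickly outruns linear intuition: under these assumed parameters, thirty years of steady doubling yields a factor of tens of thousands rather than a modest linear gain.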

The Event Horizon school notes that, for the entirety of Earth's history, all technological and social progress has been the product of the human mind. Vernor Vinge argues that technology will soon improve on human intelligence, whether through brain-computer interfaces, Artificial Intelligence, or both. Since predicting an agent's behavior requires being at least as smart as that agent, once we create agents smarter than humans, technological progress will move beyond the comprehension of anything a mere human can imagine now. Vinge called this point in time the Singularity.

The Intelligence Explosion school asserts that a positive feedback loop could arise in which an intelligence makes itself smarter, thereby getting better at making itself smarter still. A strong version of this idea holds that once this feedback starts to play a role, it will lead to a dramatic leap in capability very quickly. The explosion does not necessarily require a purely digital substrate: humans with computer-augmented brains, or humans who are genetically altered, could also set off an Intelligence Explosion. It is this interpretation of the Singularity that Less Wrong broadly focuses on.
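
A toy way to see why such a feedback loop can produce a sudden jump is to iterate a simple update rule in which each round's improvement grows with current capability. The update rule, the gain, and the exponent below are illustrative assumptions for this sketch, not anyone's actual model of recursive self-improvement.

```python
# Toy model of the feedback loop: each round, the system's capacity to improve itself
# grows with its current capability. All parameters here are illustrative assumptions.

def simulate_takeoff(capability: float, gain: float, exponent: float, steps: int) -> list:
    """Iterate capability <- capability + gain * capability ** exponent and record the path."""
    trajectory = [capability]
    for _ in range(steps):
        capability += gain * capability ** exponent
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    # exponent = 1.0 gives ordinary exponential growth; exponent > 1 makes each
    # improvement enlarge the next one disproportionately, so growth runs away
    # once the feedback starts to dominate.
    print(simulate_takeoff(1.0, 0.1, 1.0, 30)[-1])
    print(simulate_takeoff(1.0, 0.1, 1.5, 30)[-1])
```

In this crude sketch the stronger-feedback trajectory stays unremarkable for many steps and then jumps by many orders of magnitude, which is the qualitative shape that "hard takeoff" versions of this scenario have in mind.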

Chalmers’ analysis

Philosopher David Chalmers published a significant analysis of the Singularity, focusing on intelligence explosions, in the Journal of Consciousness Studies. He gives a careful analysis of the main premises of, and arguments for, the singularity's occurrence. According to him, the main argument is:

1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
—————-
4. There will be AI++ (before long, absent defeaters).

Here "AI" denotes human-level artificial intelligence, "AI+" intelligence of greater than human level, and "AI++" intelligence of far greater than human level (superintelligence).

He then examines the support for these three premises. Premise 1 seems to be grounded in either the Evolutionary argument for human-level AI or the Emulation argument for human-level AI. Premise 2 rests on the existence and feasibility of an extensibility method for producing greater-than-human intelligence. Premise 3 is a more general version of premise 2. His analysis of how the singularity could occur defends the likelihood of an intelligence explosion. He also discusses the nature of general intelligence and possible obstacles to a singularity. A good deal of discussion is given to the dangers of an intelligence explosion, and Chalmers concludes that we must negotiate it very carefully by building the correct values into the initial AIs.

References

External links

1960: The Year The Singularity Was Cancelled
Scott Alexander, Apr 23, 2019, 1:30 AM
104 points, 15 comments, 11 min read, LW link, 1 review
(slatestarcodex.com)

Singularity goes mainstream (in philosophy)
lukeprog, Mar 21, 2011, 3:18 AM
43 points, 7 comments, 1 min read, LW link

Hard Takeoff
Eliezer Yudkowsky, Dec 2, 2008, 8:44 PM
35 points, 34 comments, 11 min read, LW link

Vernor Vinge, who coined the term “Technological Singularity”, dies at 79
Kaj_Sotala, Mar 21, 2024, 10:14 PM
149 points, 25 comments, 1 min read, LW link
(arstechnica.com)

How much should we value life?
Adam Zerner, Sep 6, 2021, 10:30 PM
30 points, 25 comments, 38 min read, LW link

A Less Wrong singularity article?
Kaj_Sotala, Nov 17, 2009, 2:15 PM
31 points, 215 comments, 1 min read, LW link

Some Thoughts on Singularity Strategies
Wei Dai, Jul 13, 2011, 2:41 AM
45 points, 30 comments, 3 min read, LW link

“The Singularity Is Nearer” by Ray Kurzweil—Review
Lavender, Jul 8, 2024, 9:32 PM
22 points, 0 comments, 4 min read, LW link

Proposing Human Survival Strategy based on the NAIA Vision: Toward the Co-evolution of Diverse Intelligences
Hiroshi Yamakawa, Feb 27, 2025, 5:18 AM
−2 points, 0 comments, 11 min read, LW link

Economic Topology, ASI, and the Separation Equilibrium
mkualquiera, Feb 27, 2025, 4:36 PM
2 points, 11 comments, 6 min read, LW link

prÆy
oimrqs, Jan 11, 2025, 7:42 PM
1 point, 0 comments, 1 min read, LW link

Tetherware #1: The case for humanlike AI with free will
Jáchym Fibír, Jan 30, 2025, 10:58 AM
5 points, 14 comments, 10 min read, LW link
(tetherware.substack.com)

[Question] What do we *really* expect from a well-aligned AI?
Jan Betley, Jan 4, 2021, 8:57 PM
13 points, 10 comments, 1 min read, LW link

A basic mathematical structure of intelligence
Golol, Apr 12, 2023, 4:49 PM
4 points, 6 comments, 4 min read, LW link

Is Interpretability All We Need?
RogerDearnaley, Nov 14, 2023, 5:31 AM
1 point, 1 comment, 1 min read, LW link

LLMs May Find It Hard to FOOM
RogerDearnaley, Nov 15, 2023, 2:52 AM
11 points, 30 comments, 12 min read, LW link

AI Alignment in The New Yorker
Eleni Angelou, May 17, 2023, 9:36 PM
8 points, 0 comments, 1 min read, LW link
(www.newyorker.com)

Muehlhauser-Goertzel Dialogue, Part 1
lukeprog, Mar 16, 2012, 5:12 PM
42 points, 161 comments, 33 min read, LW link

Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”
Oliver--Klozoff, Jun 29, 2023, 3:45 AM
1 point, 0 comments, 28 min read, LW link

Reducing the risk of catastrophically misaligned AI by avoiding the Singleton scenario: the Manyton Variant
GravitasGradient, Aug 6, 2023, 2:24 PM
−6 points, 0 comments, 3 min read, LW link

Outline of possible Singularity scenarios (that are not completely disastrous)
Wei Dai, Jul 6, 2011, 9:17 PM
41 points, 40 comments, 2 min read, LW link

What does the world look like, the day before FAI efforts succeed?
[deleted], Nov 16, 2012, 8:56 PM
36 points, 64 comments, 6 min read, LW link

Ascension
nullonesix, Sep 24, 2023, 5:35 PM
1 point, 0 comments, 5 min read, LW link

Can we model technological singularity as the phase transition?
Valentin2026, Dec 26, 2020, 3:20 AM
4 points, 3 comments, 4 min read, LW link

Singularity&phase transition-2. A priori probability and ways to check.
Valentin2026, Feb 8, 2021, 2:21 AM
1 point, 0 comments, 3 min read, LW link

Is the argument that AI is an xrisk valid?
MACannon, Jul 19, 2021, 1:20 PM
5 points, 61 comments, 1 min read, LW link
(onlinelibrary.wiley.com)

[Question] List of concrete hypotheticals for AI takeover?
Yitz, Apr 7, 2022, 4:54 PM
7 points, 5 comments, 1 min read, LW link

Which singularity schools plus the no singularity school was right?
Noosphere89, Jul 23, 2022, 3:16 PM
9 points, 26 comments, 9 min read, LW link

Complexity No Bar to AI (Or, why Computational Complexity matters less than you think for real life problems)
Noosphere89, Aug 7, 2022, 7:55 PM
17 points, 14 comments, 3 min read, LW link
(www.gwern.net)

[Question] Pink Shoggoths: What does alignment look like in practice?
Yuli_Ban, Feb 25, 2023, 12:23 PM
25 points, 13 comments, 11 min read, LW link

The humanity’s biggest mistake
RomanS, Mar 10, 2023, 4:30 PM
0 points, 1 comment, 2 min read, LW link

timestamping through the Singularity
throwaway918119127, Feb 28, 2024, 7:09 PM
−2 points, 4 comments, 8 min read, LW link

The future of Humans: Operators of AI
François-Joseph Lacroix, Dec 30, 2023, 11:46 PM
1 point, 0 comments, 1 min read, LW link
(medium.com)

An Optimistic Solution to the Fermi Paradox
Glenn Clayton, Mar 10, 2024, 2:39 PM
4 points, 6 comments, 13 min read, LW link

Heaven as a Constrained Boltzmann Generator: Exploring the Concept and Structure
High Tides, Dec 13, 2024, 1:04 AM
1 point, 0 comments, 4 min read, LW link

[Crosspost] Deep Dive: The Coming Technological Singularity—How to survive in a Post-human Era
simulacra.exe, Feb 9, 2024, 6:49 PM
2 points, 2 comments, 9 min read, LW link

After Overmorrow: Scattered Musings on the Immediate Post-AGI World
Yuli_Ban, Feb 24, 2024, 3:49 PM
−3 points, 0 comments, 26 min read, LW link

[Question] Can singularity emerge from transformers?
MP, Apr 8, 2024, 2:26 PM
−3 points, 1 comment, 1 min read, LW link