
Recursive Self-Improvement


Recursive Self-Improvement refers to the property of making improvements to one’s own ability to make self-improvements. It is an approach to Artificial General Intelligence in which a system makes adjustments to its own functionality, resulting in improved performance. The system can then feed back on itself, with each cycle reaching ever-higher levels of intelligence, resulting in either a hard or soft AI takeoff.

An agent that can self-improve gets a linear succession of improvements; however, if it is also able to improve its ability to make self-improvements, then each step will yield exponentially more improvement than the previous one.
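
A toy model makes this distinction concrete. The sketch below is purely illustrative; the growth rates, starting values, and number of cycles are arbitrary assumptions rather than anything taken from the literature. It contrasts an agent that adds a fixed amount of capability each cycle with one that also improves its rate of improvement each cycle:

```python
# Toy comparison: plain self-improvement vs. recursive self-improvement.
# All numbers are arbitrary illustrative assumptions.

def plain_self_improvement(steps: int, gain: float = 1.0) -> list[float]:
    """Each cycle adds a fixed amount of capability (linear growth)."""
    capability, history = 1.0, []
    for _ in range(steps):
        capability += gain
        history.append(capability)
    return history


def recursive_self_improvement(steps: int, meta_gain: float = 0.5) -> list[float]:
    """Each cycle also improves the improvement step itself (compounding growth)."""
    capability, gain, history = 1.0, 1.0, []
    for _ in range(steps):
        capability += gain
        gain *= 1.0 + meta_gain  # the ability to improve is itself improved
        history.append(capability)
    return history


if __name__ == "__main__":
    print("plain:    ", [round(c, 1) for c in plain_self_improvement(10)])
    print("recursive:", [round(c, 1) for c in recursive_self_improvement(10)])
```

After ten cycles the first agent has gained ten units of capability, while the second has gained over a hundred; the gap widens every cycle because the improvement step itself keeps improving.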

Recursive self-improvement and AI takeoff

Recursively self-improving AI is considered the driving force behind the intelligence explosion. While any sufficiently intelligent AI would be able to improve itself, Seed AIs are specifically designed to use recursive self-improvement as their primary method of gaining intelligence. Architectures that were not designed with this goal in mind, such as neural networks or large “hand-coded” projects like Cyc, would have a harder time self-improving.

Eliezer Yudkowsky argues that a recursively self-improving AI seems likely to deliver a hard AI takeoff – a fast, abrupt, local increase in capability – since the exponential increase in intelligence would yield exponential returns in benefits and resources, which would feed still greater returns at the next step, and so on. In his view a soft takeoff scenario seems unlikely: “it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.”[1]
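
One way to see why the outcome is so sensitive to the shape of the returns curve is a simple reinvestment model (an illustrative sketch, not taken from Yudkowsky’s writing; the constants and the blow-up threshold below are arbitrary assumptions). If intelligence grows at a rate proportional to I^alpha, then diminishing returns (alpha < 1) level off, constant returns (alpha = 1) give steady exponential growth, and increasing returns (alpha > 1) run away – the flatline-or-blow-up intuition in numerical form:

```python
# Illustrative reinvestment model: intelligence grows at a rate proportional
# to intelligence**alpha. All constants here are arbitrary assumptions.
#   alpha < 1: diminishing returns, growth levels off
#   alpha = 1: constant returns, steady exponential growth
#   alpha > 1: increasing returns, runaway growth

def simulate(alpha: float, k: float = 0.1, dt: float = 0.1, steps: int = 250) -> float:
    intelligence = 1.0
    for _ in range(steps):
        # Euler step of dI/dt = k * I**alpha
        intelligence += k * (intelligence ** alpha) * dt
        if intelligence > 1e12:  # treat crossing this threshold as "blowing up"
            return float("inf")
    return intelligence


if __name__ == "__main__":
    for alpha in (0.5, 1.0, 1.5):
        print(f"alpha = {alpha}: final intelligence = {simulate(alpha):.3g}")
```

Only a narrow band of returns around alpha = 1 produces the steady, gradual growth that a soft takeoff would require.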

Yudkowsky argues that several considerations support the hard takeoff scenario: one improvement tends to open the way to another, there may be a hardware overhang, and sometimes, when navigating through problem space, one finds a succession of problems that are extremely easy to solve. These are all reasons to expect sudden and abrupt increases in capability. Robin Hanson, on the other hand, argues that improvements will mostly accumulate slowly and gradually, with no sharp change.

Self-improvement in humans

The human species has made an enormous amount of progress since evolving around fifty thousand years ago. This is because we can pass on knowledge and infrastructure from previous generations. This is a type of self-improvement, but it is not recursive. If we never learned to modify our own brains, then we would eventually reach the point where making new discoveries required more knowledge than could be gained in a human lifetime. All human progress to date has been limited by the hardware we are born with, which is the same hardware Homo sapiens were born with fifty thousand years ago.

“True” recursive self-improvement will come when we discover how to drastically modify or augment our own brains in order to become more intelligent. This would, in turn, let us discover more quickly how to become more intelligent still.

Recursive self-improvement and Instrumental value

Nick Bostrom and Steve Omohundro have separately[2] argued[3] that, although values and intelligence are independent, any recursively self-improving intelligence would likely converge on a common set of instrumental values that are useful for achieving almost any kind of goal. As a system continued modifying itself towards greater intelligence, it would be likely to adopt more of these behaviors.

Blog posts

Why all the fuss about recursive self-improvement?

So8res · Jun 12, 2022, 8:53 PM
158 points
62 comments · 7 min read · LW link · 1 review

Notes on notes on virtues

David Gross · Dec 30, 2020, 5:47 PM
71 points
11 comments · 11 min read · LW link

AGI systems & humans will both need to solve the alignment problem

Jeffrey Ladish · Feb 24, 2023, 3:29 AM
59 points
14 comments · 4 min read · LW link

Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1)

DragonGod · Jun 4, 2022, 6:42 PM
17 points
11 comments · 13 min read · LW link

Ngo and Yudkowsky on AI capability gains

Nov 18, 2021, 10:19 PM
130 points
61 comments · 39 min read · LW link · 1 review

AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them

Roman Leventov · Dec 27, 2023, 2:51 PM
33 points
9 comments · 4 min read · LW link

o1: A Technical Primer

Jesse Hoogland · Dec 9, 2024, 7:09 PM
170 points
19 comments · 9 min read · LW link
(www.youtube.com)

Resolving von Neumann-Morgenstern Inconsistent Preferences

niplav · Oct 22, 2024, 11:45 AM
38 points
5 comments · 58 min read · LW link

Will AI R&D Automation Cause a Software Intelligence Explosion?

Mar 26, 2025, 6:12 PM
17 points
3 comments · 2 min read · LW link
(www.forethought.org)

AI Will Not Want to Self-Improve

petersalib · May 16, 2023, 8:53 PM
20 points
24 comments · 20 min read · LW link

Nice intro video to RSI

Nathan Helm-Burger · May 16, 2023, 6:48 PM
12 points
0 comments · 1 min read · LW link
(youtu.be)

AGI-Automated Interpretability is Suicide

__RicG__ · May 10, 2023, 2:20 PM
25 points
33 comments · 7 min read · LW link

“textbooks are all you need”

bhauth · Jun 21, 2023, 5:06 PM
66 points
18 comments · 2 min read · LW link
(arxiv.org)

Contra Anton 🏴‍☠️ on Kolmogorov complexity and recursive self improvement

DaemonicSigil · Jun 30, 2023, 5:15 AM
25 points
12 comments · 2 min read · LW link

Recursive Self-Improvement

Eliezer Yudkowsky · Dec 1, 2008, 8:49 PM
38 points
54 comments · 13 min read · LW link

Recursively Self-Improving Human Intelligence

curiousepic · Feb 17, 2011, 9:55 PM
17 points
13 comments · 1 min read · LW link

...Recursion, Magic

Eliezer Yudkowsky · Nov 25, 2008, 9:10 AM
34 points
28 comments · 5 min read · LW link

Cascades, Cycles, Insight...

Eliezer Yudkowsky · Nov 24, 2008, 9:33 AM
35 points
31 comments · 8 min read · LW link

Self-improvement without self-modification

Stuart_Armstrong · Jul 23, 2015, 9:59 AM
7 points
5 comments · 1 min read · LW link

Stable self-improvement as a research problem

paulfchristiano · Nov 17, 2014, 5:51 PM
8 points
7 comments · 7 min read · LW link

Examples of AI Increasing AI Progress

TW123 · Jul 17, 2022, 8:06 PM
107 points
14 comments · 1 min read · LW link

[Question] Is “Recursive Self-Improvement” Relevant in the Deep Learning Paradigm?

DragonGod · Apr 6, 2023, 7:13 AM
32 points
36 comments · 7 min read · LW link

A Year of AI Increasing AI Progress

TW123 · Dec 30, 2022, 2:09 AM
148 points
3 comments · 2 min read · LW link

Will Values and Competition Decouple?

interstice · Sep 28, 2022, 4:27 PM
15 points
11 comments · 17 min read · LW link

Squeezing foundations research assistance out of formal logic narrow AI.

Donald Hobson · Mar 8, 2023, 9:38 AM
16 points
1 comment · 2 min read · LW link

Notes on Gratitude

David Gross · Jan 13, 2021, 8:37 PM
11 points
0 comments · 19 min read · LW link

Quantifying the Qualitative: Towards a Bayesian Approach to Personal Insight

Pruthvi Kumar · Feb 15, 2025, 7:50 PM
1 point
0 comments · 6 min read · LW link

[Question] What’s your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5?

Super AGI · May 26, 2023, 1:43 AM
7 points
15 comments · 1 min read · LW link

Proposal: labs should precommit to pausing if an AI argues for itself to be improved

NickGabs · Jun 2, 2023, 10:31 PM
3 points
3 comments · 4 min read · LW link

human intelligence may be alignment-limited

bhauth · Jun 15, 2023, 10:32 PM
16 points
3 comments · 2 min read · LW link

Why Copilot Accelerates Timelines

Michaël Trazzi · Apr 26, 2022, 10:06 PM
35 points
14 comments · 7 min read · LW link

Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”

Oliver--Klozoff · Jun 29, 2023, 3:45 AM
1 point
0 comments · 28 min read · LW link

The Hard Intelligence Hypothesis and Its Bearing on Succession Induced Foom

DragonGod · May 31, 2022, 7:04 PM
10 points
7 comments · 4 min read · LW link

ACI#4: Seed AI is the new Perpetual Motion Machine

Akira Pyinya · Jul 8, 2023, 1:17 AM
−1 points
0 comments · 6 min read · LW link

The Structural Singularity of Self-Optimizing AI: When Recursive Prediction Causes Internal Collapse

KaedeHamasaki · Apr 6, 2025, 4:40 PM
1 point
0 comments · 1 min read · LW link

1 What If We Rebuild Motivation with the Fermi ESTIMATion?

P. João · Dec 17, 2024, 7:46 AM
6 points
0 comments · 3 min read · LW link

Alignment Might Never Be Solved, By Humans or AI

interstice · Oct 7, 2022, 4:14 PM
49 points
6 comments · 3 min read · LW link

truth.integrity(): A Recursive Framework for Hallucination Prevention and Alignment

brittneyluong · Apr 2, 2025, 5:52 PM
1 point
0 comments · 2 min read · LW link

A Simple Theory Of Consciousness

SherlockHolmes · Aug 8, 2023, 6:05 PM
2 points
5 comments · 1 min read · LW link
(peterholmes.medium.com)

Virtue ethics and why the rationalist community might care about it.

David Gross · Oct 22, 2020, 3:53 AM
36 points
2 comments · 6 min read · LW link

What program structures enable efficient induction?

Daniel C · Sep 5, 2024, 10:12 AM
23 points
5 comments · 3 min read · LW link

0 Motivation Mapping through Information Theory

P. João · Apr 18, 2025, 12:53 AM
7 points
0 comments · 26 min read · LW link

notes on prioritizing tasks & cognition-threads

Emrik · Nov 26, 2024, 12:28 AM
3 points
1 comment · 4 min read · LW link

Why Recursive Self-Improvement Might Not Be the Existential Risk We Fear

Nassim_A · Nov 24, 2024, 5:17 PM
1 point
0 comments · 9 min read · LW link

8 PRIME IDENTITIES – A construction from MaxEnt Informational Efficiency in 4 questions

P. João · Apr 16, 2025, 4:53 PM
3 points
0 comments · 6 min read · LW link

8 PRIME IDENTITIES - An analisis

P. João · Apr 17, 2025, 11:36 AM
−5 points
0 comments · 2 min read · LW link

8 PRIME IDENTITIES - A simplified construction from MaxEnt Informational Efficiency in 4 questions

P. João · Apr 17, 2025, 11:04 AM
3 points
5 comments · 3 min read · LW link

Gamify life from BayesianMind

P. João · Apr 16, 2025, 4:17 PM
6 points
2 comments · 1 min read · LW link

Engelbart: Insufficiently Recursive

Eliezer Yudkowsky · Nov 26, 2008, 8:31 AM
22 points
22 comments · 7 min read · LW link

The alignment stability problem

Seth Herd · Mar 26, 2023, 2:10 AM
35 points
15 comments · 4 min read · LW link

If Alignment is Hard, then so is Self-Improvement

PavleMiha · Apr 7, 2023, 12:08 AM
21 points
20 comments · 1 min read · LW link

Eric Schmidt on recursive self-improvement

Nikola Jurkovic · Nov 5, 2023, 7:05 PM
24 points
3 comments · 1 min read · LW link
(www.youtube.com)

LLMs May Find It Hard to FOOM

RogerDearnaley · Nov 15, 2023, 2:52 AM
11 points
30 comments · 12 min read · LW link

The AI Explosion Might Never Happen

snewman · Sep 19, 2023, 11:20 PM
22 points
31 comments · 9 min read · LW link

Concrete vs Contextual values

whpearson · Jun 2, 2009, 9:47 AM
−1 points
32 comments · 3 min read · LW link

[Question] Would you join the Society of the Free & Easy?

David Gross · Jul 10, 2019, 1:15 AM
18 points
1 comment · 3 min read · LW link

AI self-improvement is possible

bhauth · May 23, 2023, 2:32 AM
18 points
3 comments · 8 min read · LW link
No comments.