
Computing Overhang

Last edit: Dec 30, 2024, 10:14 AM by Dakara

Computing Overhang is a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen if previously used algorithms have been suboptimal.

In the context of Artificial General Intelligence, this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.
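A toy calculation, using entirely made-up numbers, illustrates the proliferation dynamic described above: if an algorithmic breakthrough shrinks the compute each AGI needs, the same fixed hardware stock supports proportionally more copies. All quantities here are hypothetical placeholders, not estimates.

```python
# Toy model of a computing overhang (all numbers are hypothetical).
available_flops = 1e21        # assumed total easily available compute (FLOP/s)
flops_per_agi_before = 1e19   # compute one AGI needs with suboptimal algorithms
flops_per_agi_after = 1e16    # compute one AGI needs after an algorithmic advance

# Number of AGI copies the same hardware stock can run in each case.
copies_before = available_flops / flops_per_agi_before
copies_after = available_flops / flops_per_agi_after

print(f"Copies supportable before: {copies_before:.0f}")          # 100
print(f"Copies supportable after:  {copies_after:.0f}")           # 100000
print(f"Increase factor:           {copies_after / copies_before:.0f}")  # 1000
```

The point of the sketch is only that the number of runnable copies scales inversely with per-AGI compute cost, so a purely algorithmic improvement translates directly into a multiplicative jump in AGI population.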

Examples

In 2010, the President’s Council of Advisors on Science and Technology reported that a benchmark production-planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of roughly 43,000 came from algorithmic improvements. This clearly reflects a situation in which new programming methods used the available computing power far more efficiently.
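Since independent speedup factors multiply, the two contributions reported above should combine to roughly the total. A one-line check:

```python
# Speedup factors multiply: hardware gains x algorithmic gains = total gain.
hardware_factor = 1_000     # speedup attributed to better hardware, 1988-2003
algorithm_factor = 43_000   # speedup attributed to better algorithms

total_speedup = hardware_factor * algorithm_factor
print(f"{total_speedup:,}")  # 43,000,000
```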

Today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources, either by searching deeper and deeper game trees (as in high-powered chess programs) or by performing large numbers of parallel operations over extensive databases (as in IBM’s Watson playing Jeopardy!). While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, where most current work is focused.

Though estimates place the computing power needed for whole brain emulation at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient route to AI. This is mainly because our brains evolved through natural selection and were not deliberately designed with the goal of being modeled by AI.

As Yudkowsky puts it, human intelligence, created by this “blind” evolutionary process, has only recently developed the capacity for deliberation: planning and forward thinking. Almost all of our other cognitive tools, by contrast, are the product of ancestral selection pressures, and they form the roots of almost all our behavior. When designing complex systems in which the designer (us) collaborates with the system being constructed, we therefore face a design signature completely different from the process that gave birth to our brains, and a correspondingly different path to AGI.

See also

Taboo “compute overhang”
Zach Stein-Perlman, Mar 1, 2023 · 21 points · 8 comments · 1 min read

Are we in an AI overhang?
Andy Jones, Jul 27, 2020 · 266 points · 106 comments · 4 min read

Relevant pre-AGI possibilities
Daniel Kokotajlo, Jun 20, 2020 · 38 points · 7 comments · 19 min read · (aiimpacts.org)

How Much Computational Power Does It Take to Match the Human Brain?
habryka, Sep 12, 2020 · 44 points · 1 comment · 1 min read · (www.openphilanthropy.org)

A closer look at chess scalings (into the past)
hippke, Jul 15, 2021 · 50 points · 14 comments · 4 min read

Brain-inspired AGI and the “lifetime anchor”
Steven Byrnes, Sep 29, 2021 · 65 points · 16 comments · 13 min read

GPT-2005: A conversation with ChatGPT (featuring semi-functional Wolfram Alpha plugin!)
Lone Pine, Mar 24, 2023 · 19 points · 0 comments · 22 min read

Measuring hardware overhang
hippke, Aug 5, 2020 · 115 points · 14 comments · 4 min read

AI overhangs depend on whether algorithms, compute and data are substitutes or complements
NathanBarnard, Dec 16, 2022 · 2 points · 0 comments · 3 min read

Thoughts on hardware / compute requirements for AGI
Steven Byrnes, Jan 24, 2023 · 59 points · 30 comments · 24 min read

Against “argument from overhang risk”
RobertM, May 16, 2024 · 30 points · 11 comments · 5 min read

Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy
garrison, Feb 10, 2024 · 198 points · 52 comments · 1 min read · (garrisonlovely.substack.com)

Before smart AI, there will be many mediocre or specialized AIs
Lukas Finnveden, May 26, 2023 · 57 points · 14 comments · 9 min read · 1 review

Addressing doubts of AI progress: Why GPT-5 is not late, and why data scarcity isn’t a fundamental limiter near term.
LDJ, Jan 17, 2025 · 2 points · 0 comments · 2 min read

The 0.2 OOMs/year target
Cleo Nardo, Mar 30, 2023 · 84 points · 24 comments · 5 min read

Are There Examples of Overhang for Other Technologies?
Jeffrey Heninger, Dec 13, 2023 · 59 points · 50 comments · 11 min read · (blog.aiimpacts.org)

Inference cost limits the impact of ever larger models
SoerenMind, Oct 23, 2021 · 42 points · 29 comments · 2 min read

How should DeepMind’s Chinchilla revise our AI forecasts?
Cleo Nardo, Sep 15, 2022 · 35 points · 12 comments · 13 min read

We are headed into an extreme compute overhang
devrandom, Apr 26, 2024 · 53 points · 34 comments · 2 min read