RSS

Squig­gle Max­i­mizer (formerly “Paper­clip max­i­mizer”)

TagLast edit: 5 Apr 2023 4:21 UTC by ryan_greenblatt

A Squiggle Maximizer is a hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclip-shaped-molecular-squiggles in the universe. The squiggle maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an existential threat.

This produces a thought experiment which shows the contingency of human values: An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.

Historical Note: This was originally called a “paperclip maximizer”, with paperclips chosen for illustrative purposes because it is very unlikely to be implemented, and has little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). Many people interpreted this to be about an AI that was specifically given the instruction of manufacturing paperclips, and that the intended lesson was of an outer alignment failure. i.e humans failed to give the AI the correct goal. Yudkowsky has since stated the originally intended lesson was of inner alignment failure, wherein the humans gave the AI some other goal, but the AI’s internal processes converged on a goal that seems completely arbitrary from the human perspective.)

Description

First mentioned by Yudkowsky on the extropian’s mailing list, a squiggle maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of molecular squiggles in its collection.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where “intelligence” is understood in the sense of optimization power, the ability to maximize a reward/​utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform “first all of earth and then increasing portions of space into paperclip manufacturing facilities”.

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won’t revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.

A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn’t require very high general intelligence.

Motivation

The idea of a paperclip maximizer was created to illustrate some ideas about AI risk:

Conclusions

The paperclip maximizer illustrates that an entity can be a powerful optimizer—an intelligence—without sharing any of the complex mix of human terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation, and that an AGI that is not specifically programmed to be benevolent to humans will be almost as dangerous as if it were designed to be malevolent.

Any future AGI with full power over the lightcone, if it is not to destroy most potential from a human perspective, must have something sufficiently close to human values as its terminal value (goal). Further, seemingly small deviations could result in losing most of the value. Human values seem unlikely to spontaneously emerge in a generic optimization process[1]. A dependably safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.

Similar thought experiments

Other goals for AGIs have been used to illustrate similar concepts.

Some goals are apparently morally neutral, like the paperclip maximizer. These goals involve a very minor human “value,” in this case making paperclips. The same point can be illustrated with a much more significant value, such as eliminating cancer. An optimizer which instantly vaporized all humans would be maximizing for that value.

Other goals are purely mathematical, with no apparent real-world impact. Yet these too present similar risks. For example, if an AGI had the goal of solving the Riemann Hypothesis, it might convert all available mass to computronium (the most efficient possible computer processors).

Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or “tiling the galaxy with tiny smiling faces” (Yudkowsky 2008).

References

See also

  1. ^

    Though it’s conceivable that empirical versions of moral realism could hold in practice.

The True Pri­soner’s Dilemma

Eliezer Yudkowsky3 Sep 2008 21:34 UTC
229 points
117 comments4 min readLW link

It Looks Like You’re Try­ing To Take Over The World

gwern9 Mar 2022 16:35 UTC
406 points
120 comments1 min readLW link1 review
(www.gwern.net)

A sim­ple case for ex­treme in­ner misalignment

Richard_Ngo13 Jul 2024 15:40 UTC
85 points
41 comments7 min readLW link

An Ap­peal to AI Su­per­in­tel­li­gence: Rea­sons to Pre­serve Humanity

James_Miller18 Mar 2023 16:22 UTC
37 points
73 comments12 min readLW link

Mag­i­cal Categories

Eliezer Yudkowsky24 Aug 2008 19:51 UTC
74 points
133 comments9 min readLW link

Non-su­per­in­tel­li­gent pa­per­clip max­i­miz­ers are normal

jessicata10 Oct 2023 0:29 UTC
67 points
4 comments9 min readLW link
(unstableontology.com)

Will Ar­tifi­cial Su­per­in­tel­li­gence Kill Us?

James_Miller23 May 2023 16:27 UTC
33 points
2 comments22 min readLW link

Pre­dic­tion: any un­con­trol­lable AI will turn earth into a gi­ant computer

Karl von Wendt17 Apr 2023 12:30 UTC
11 points
8 comments3 min readLW link

[Question] Seek­ing feed­back on a cri­tique of the pa­per­clip max­i­mizer thought experiment

bio neural15 Jul 2024 18:39 UTC
3 points
9 comments1 min readLW link

Maybe you want to max­imise pa­per­clips too

dougclow30 Oct 2014 21:40 UTC
70 points
29 comments2 min readLW link

The pa­per­clip max­imiser’s perspective

Angela1 May 2015 0:24 UTC
48 points
24 comments2 min readLW link

The Un­ex­pected Philo­soph­i­cal Depths of the Clicker Game Univer­sal Paperclips

Jayson_Virissimo28 Mar 2019 23:39 UTC
23 points
3 comments1 min readLW link
(www.newyorker.com)

[LINK] Ar­ti­cle in the Guardian about CSER, men­tions MIRI and pa­per­clip AI

Sarokrae30 Aug 2014 14:04 UTC
27 points
17 comments1 min readLW link

Paper­clip Max­i­mizer Re­vis­ited

Jan_Rzymkowski19 Jun 2014 1:25 UTC
23 points
13 comments1 min readLW link

Hal­loween cos­tume: Paperclipperer

Elo21 Oct 2017 6:33 UTC
6 points
0 comments2 min readLW link

Eth­i­cal dilem­mas for pa­per­clip maximizers

CronoDAS1 Aug 2011 5:31 UTC
14 points
25 comments1 min readLW link

Alter­na­tive uses of paperclips

taw8 Jan 2012 18:24 UTC
18 points
16 comments1 min readLW link

The Utility of Hu­man Atoms for the Paper­clip Maximizer

avturchin2 Feb 2018 10:06 UTC
2 points
21 comments3 min readLW link

Robopoca­lypse au­thor cites Yud­kowsky’s pa­per­clip scenario

CarlShulman17 Jul 2011 2:18 UTC
5 points
2 comments1 min readLW link

Tool for max­i­miz­ing pa­per­clips vs a pa­per­clip maximizer

private_messaging12 May 2012 7:38 UTC
3 points
23 comments1 min readLW link

Univer­sal Paperclips

Morendil13 Oct 2017 6:16 UTC
2 points
0 comments1 min readLW link
(decisionproblem.com)

Link: The Economist on Paper­clip Maximizers

Anders_H30 Jun 2016 12:40 UTC
8 points
2 comments1 min readLW link

Is a pa­per­clip­per bet­ter than noth­ing?

DataPacRat24 May 2013 19:34 UTC
10 points
116 comments1 min readLW link

Ex­trater­res­trial pa­per­clip maximizers

multifoliaterose8 Aug 2010 20:35 UTC
5 points
161 comments4 min readLW link

Why we should fear the Paper­clip­per [Link]

XiXiDu14 Feb 2011 19:24 UTC
5 points
20 comments1 min readLW link

De­bunk­ing Fal­la­cies in the The­ory of AI Motivation

[deleted]5 May 2015 2:46 UTC
−5 points
349 comments22 min readLW link

Algo trad­ing is a cen­tral ex­am­ple of AI risk

Vanessa Kosoy28 Jul 2018 20:31 UTC
27 points
5 comments1 min readLW link

Na­ture < Nur­ture for AIs

scottviteri4 Jun 2023 20:38 UTC
14 points
22 comments7 min readLW link

Matt Lev­ine spots IRL Paper­clip Max­i­mizer in Reddit

Nebu23 Sep 2021 19:10 UTC
9 points
2 comments2 min readLW link

A suffi­ciently para­noid pa­per­clip maximizer

RomanS8 Aug 2022 11:17 UTC
17 points
10 comments2 min readLW link

Op­ti­mal­ity is the tiger, and an­noy­ing the user is its teeth

Christopher King28 Jan 2023 20:20 UTC
25 points
6 comments2 min readLW link

Paper­clipGPT(-4)

Michael Tontchev14 Mar 2023 22:03 UTC
7 points
0 comments11 min readLW link

Align­ment works both ways

Karl von Wendt7 Mar 2023 10:41 UTC
23 points
21 comments2 min readLW link

What’s wrong with the pa­per­clips sce­nario?

No77e7 Jan 2023 17:58 UTC
31 points
11 comments1 min readLW link

Out of the Box

jesseduffield13 Nov 2023 23:43 UTC
5 points
1 comment7 min readLW link

Re­ac­tion to “Em­pow­er­ment is (al­most) All We Need” : an open-ended alternative

Ryo 25 Nov 2023 15:35 UTC
9 points
3 comments5 min readLW link

Towards an Ethics Calcu­la­tor for Use by an AGI

sweenesm12 Dec 2023 18:37 UTC
3 points
2 comments11 min readLW link

But What If We Ac­tu­ally Want To Max­i­mize Paper­clips?

snerx25 May 2023 7:13 UTC
−17 points
6 comments7 min readLW link