
Inside/Outside View

Last edit: 16 Nov 2021 15:13 UTC by Yoav Ravid

An Inside View on a topic involves making predictions based on your understanding of the details of the process. An Outside View involves ignoring those details and estimating from the outcomes of a class of roughly similar previous cases (also called reference class forecasting), though it has been pointed out that the term's meaning has since expanded beyond that.

For example, someone working on a project may estimate that they can reasonably complete 20% of it per day, and so conclude that they will finish in five days (inside view). Or they might note that all of their previous projects were completed just before the deadline, so since this project's deadline is in 30 days, that's when it will get done (outside view).
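To make the contrast concrete, here is a minimal, illustrative sketch (in Python) of the two estimates from the project example above; the daily rate, deadline, and past-project history are all hypothetical numbers:

```python
# Illustrative sketch of the two estimation styles above.
# All numbers (rate, deadline, history) are hypothetical.

# Inside view: reason from the details of this particular project.
fraction_done_per_day = 0.20
inside_view_days = 1.0 / fraction_done_per_day  # 5 days

# Outside view: ignore the details and look at a reference class,
# here the finishing times of previous projects relative to their deadlines.
days_before_deadline_history = [0, 1, 0, 0, 1]  # past projects finished ~at deadline
typical_slack = sum(days_before_deadline_history) / len(days_before_deadline_history)

deadline_in_days = 30
outside_view_days = deadline_in_days - typical_slack

print(f"Inside view:  done in {inside_view_days:.0f} days")
print(f"Outside view: done in {outside_view_days:.1f} days (just before the deadline)")
```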

The terms were originally developed by Daniel Kahneman and Amos Tversky. An early use is in Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking (Kahneman & Lovallo, 1993), and the terms were popularised in Thinking, Fast and Slow (Kahneman, 2011; relevant excerpt). The planning example above is discussed in The Planning Fallacy.

Examples of the outside view

1. From Beware the Inside View, by Robin Hanson:

I did a 1500 piece jigsaw puzzle of fireworks, my first jigsaw in at least ten years. Several times I had the strong impression that I had carefully eliminated every possible place a piece could go, or every possible piece that could go in a place. I was very tempted to conclude that many pieces were missing, or that the box had extra pieces from another puzzle. This wasn’t impossible – the puzzle was an open box a relative had done before. And the alternative seemed humiliating.

But I allowed a very different part of my mind, using different considerations, to overrule this judgment; so many extra or missing pieces seemed unlikely. And in the end there was only one missing and no extra pieces. I recall a similar experience when I was learning to program. I would carefully check my program and find no errors, and then when my program wouldn’t run I was tempted to suspect compiler or hardware errors. Of course the problem was almost always my fault.

2. Japanese students expected to finish their essays an average of 10 days before the deadline. The actual average completion time was 1 day before the deadline. When asked when they had completed similar previous tasks, the average reply was 1 day before the deadline[1].

3. Students instructed to visualize how, where, and when they would perform their Christmas shopping expected to finish shopping more than a week before Christmas. A control group, asked simply when they expected their Christmas shopping to be finished, expected it to be done 4 days before Christmas. Both groups actually finished 3 days before Christmas[2].

Problems with the outside view

It is controversial how far the lesson of these experiments can be extended. Robin Hanson argues that this implies that, in futurism, forecasts should be made by trying to find a reference class of similar cases rather than by trying to visualize outcomes. Eliezer Yudkowsky responds that this leads to “reference class tennis”, wherein each disputant feels that the same event ‘obviously’ belongs to a different reference class, and that the above experiments were performed in cases where the new example was highly similar to past examples: this year’s Christmas shopping optimism and last year’s Christmas shopping optimism are much more similar to one another than the invention of the Internet is to the invention of agriculture. If someone else feels that the invention of the Internet belongs instead to the category ‘recent communications innovations’ and should be forecast by reference to television rather than agriculture, then a debate in which both sides plead the outside view has no resolution except “I’m taking my reference class and going home!”

More possible limitations and problems with using the outside view are discussed in The Outside View’s Domain and “Outside View” as Conversation-Halter. Model Combination and Adjustment discusses the implications of there usually being multiple different outside views. Taboo “Outside View” argues that the meaning of “Outside View” has expanded too much, and that it should be tabooed and replaced with more precise terminology. An alternative to “inside/​outside view” has been proposed in Gears Level & Policy Level.
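Since several reference classes usually apply to the same question, one crude way to picture what combining outside views might look like is a weighted mixture of reference-class forecasts. The sketch below is purely illustrative; the reference classes, point forecasts, and weights are all hypothetical:

```python
# Illustrative sketch: combining several outside views as a weighted mixture.
# Reference classes, point forecasts, and weights are all hypothetical.

forecasts = {
    # reference class: (forecast in days, weight = how apt the class seems)
    "my past projects":     (29, 0.5),
    "colleagues' projects": (25, 0.3),
    "industry survey data": (35, 0.2),
}

combined = sum(days * weight for days, weight in forecasts.values())
total_weight = sum(weight for _, weight in forecasts.values())
print(f"Combined outside-view forecast: {combined / total_weight:.1f} days")
```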

References

[1] Buehler, R., Griffin, D., & Ross, M. (2002). Inside the planning fallacy: The causes and consequences of optimistic time predictions. In Heuristics and biases: The psychology of intuitive judgment, 250-270. Cambridge, UK: Cambridge University Press.

[2] Buehler, R., Griffin, D., & Ross, M. (1995). It’s about time: Optimistic predictions in work and love. In European Review of Social Psychology, Volume 6, eds. W. Stroebe & M. Hewstone. Chichester: John Wiley & Sons.

Posts tagged Inside/Outside View

Taboo “Outside View”
Daniel Kokotajlo, 17 Jun 2021, 350 points, 33 comments, 8 min read, 3 reviews

Confidence levels inside and outside an argument
Scott Alexander, 16 Dec 2010, 234 points, 192 comments, 6 min read

“Outside View!” as Conversation-Halter
Eliezer Yudkowsky, 24 Feb 2010, 93 points, 103 comments, 7 min read

What cognitive biases feel like from the inside
chaosmage, 3 Jan 2020, 253 points, 32 comments, 4 min read

The Outside View’s Domain
Eliezer Yudkowsky, 21 Jun 2008, 29 points, 14 comments, 10 min read

Multitudinous outside views
Davidmanheim, 18 Aug 2020, 55 points, 13 comments, 3 min read

Model Combination and Adjustment
lukeprog, 17 Jul 2013, 102 points, 41 comments, 5 min read

The Weak Inside View
Eliezer Yudkowsky, 18 Nov 2008, 31 points, 22 comments, 5 min read

Hero Licensing
Eliezer Yudkowsky, 21 Nov 2017, 239 points, 83 comments, 52 min read

Corrigibility as outside view
TurnTrout, 8 May 2020, 36 points, 11 comments, 4 min read

Mistakes with Conservation of Expected Evidence
abramdemski, 8 Jun 2019, 232 points, 27 comments, 12 min read, 1 review

Gears Level & Policy Level
abramdemski, 24 Nov 2017, 61 points, 8 comments, 7 min read

Be less scared of overconfidence
benkuhn, 30 Nov 2022, 163 points, 22 comments, 9 min read (www.benkuhn.net)

Christiano, Cotra, and Yudkowsky on AI progress
25 Nov 2021, 119 points, 95 comments, 68 min read

Timeless Modesty?
abramdemski, 24 Nov 2017, 17 points, 2 comments, 3 min read

Reference class of the unclassreferenceable
taw, 8 Jan 2010, 25 points, 154 comments, 1 min read

Planning Fallacy
Eliezer Yudkowsky, 17 Sep 2007, 180 points, 43 comments, 3 min read

The Outside View isn’t magic
Stuart_Armstrong, 27 Sep 2017, 21 points, 4 comments, 6 min read

Outside View(s) and MIRI’s FAI Endgame
Wei Dai, 28 Aug 2013, 21 points, 60 comments, 2 min read

Outside View as the Main Debiasing Technique
abramdemski, 16 Oct 2017, 8 points, 4 comments, 2 min read

In defense of the outside view
cousin_it, 15 Jan 2010, 18 points, 29 comments, 2 min read

Discussion: weighting inside view versus outside view on extinction events
Ilverin the Stupid and Offensive, 25 Feb 2016, 5 points, 4 comments, 1 min read

Avoid misinterpreting your emotions
Kaj_Sotala, 14 Feb 2012, 92 points, 31 comments, 7 min read

Toward a New Technical Explanation of Technical Explanation
abramdemski, 16 Feb 2018, 86 points, 36 comments, 18 min read, 1 review

The Problematic Third Person Perspective
abramdemski, 5 Oct 2017, 28 points, 3 comments, 4 min read

Suspiciously balanced evidence
gjm, 12 Feb 2020, 50 points, 24 comments, 4 min read

Navigating disagreement: How to keep your eye on the evidence
AnnaSalamon, 24 Apr 2010, 47 points, 73 comments, 6 min read

Comment on SSC’s Review of Inadequate Equilibria
Ben Pace, 1 Dec 2017, 13 points, 5 comments, 2 min read

Inside View, Outside View… And Opposing View
chaosmage, 20 Dec 2023, 21 points, 1 comment, 5 min read

Induction; or, the rules and etiquette of reference class tennis
paulfchristiano, 3 Mar 2013, 11 points, 8 comments, 9 min read

[Question] What is the right phrase for “theoretical evidence”?
Adam Zerner, 1 Nov 2020, 23 points, 41 comments, 2 min read

Taking the outside view on code quality
Adam Zerner, 7 May 2021, 11 points, 17 comments, 2 min read

AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra
DanielFilan, 28 May 2021, 24 points, 1 comment, 67 min read

Surface Analogies and Deep Causes
Eliezer Yudkowsky, 22 Jun 2008, 38 points, 33 comments, 5 min read

Yudkowsky and Christiano discuss “Takeoff Speeds”
Eliezer Yudkowsky, 22 Nov 2021, 205 points, 176 comments, 60 min read, 1 review

How I Formed My Own Views About AI Safety
Neel Nanda, 27 Feb 2022, 64 points, 6 comments, 13 min read (www.neelnanda.io)

Concrete Advice for Forming Inside Views on AI Safety
Neel Nanda, 17 Aug 2022, 29 points, 6 comments, 10 min read

On Investigating Conspiracy Theories
Zvi, 20 Feb 2023, 116 points, 38 comments, 5 min read (thezvi.wordpress.com)

How To Be More Confident… That You’re Wrong
Wei Dai, 22 May 2011, 38 points, 25 comments, 1 min read

What’s the future of AI hardware?
Itay Dreyfus, 17 Jun 2024, 2 points, 0 comments, 8 min read (productidentity.co)

Instrumental Rationality 2: Planning 101
lifelonglearner, 6 Oct 2017, 17 points, 4 comments, 14 min read

External rationality vs. internal rationality
metachirality, 2 Aug 2023, 7 points, 0 comments, 1 min read

Do we have a plan for the “first critical try” problem?
Christopher King, 3 Apr 2023, −3 points, 14 comments, 1 min read

Formalizing the “AI x-risk is unlikely because it is ridiculous” argument
Christopher King, 3 May 2023, 48 points, 17 comments, 3 min read

Are You Anosognosic?
Eliezer Yudkowsky, 19 Jul 2009, 20 points, 67 comments, 1 min read

Placing Yourself as an Instance of a Class
abramdemski, 3 Oct 2017, 36 points, 5 comments, 3 min read

What Makes My Attempt Special?
Andy_McKenzie, 26 Sep 2010, 43 points, 22 comments, 2 min read

[Question] What is a reasonable outside view for the fate of social movements?
jacobjacob, 4 Jan 2019, 33 points, 27 comments, 1 min read

[Question] Best arguments against the outside view that AGI won’t be a huge deal, thus we survive.
Noosphere89, 27 Mar 2023, 4 points, 7 comments, 1 min read

Kahneman’s Planning Anecdote
Eliezer Yudkowsky, 17 Sep 2007, 38 points, 8 comments, 2 min read

Against Modest Epistemology
Eliezer Yudkowsky, 14 Nov 2017, 71 points, 48 comments, 15 min read

But What’s Your *New Alignment Insight,* out of a Future-Textbook Paragraph?
David Udell, 7 May 2022, 26 points, 18 comments, 5 min read

[Question] Accuracy of arguments that are seen as ridiculous and intuitively false but don’t have good counter-arguments
Christopher King, 29 Apr 2023, 30 points, 39 comments, 1 min read

An Outside View on Less Wrong’s Advice
Mass_Driver, 7 Jul 2011, 84 points, 162 comments, 8 min read

Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted
David Chee, 12 Jun 2023, 71 points, 15 comments, 12 min read

Trusting Expert Consensus
ChrisHallquist, 16 Oct 2013, 41 points, 81 comments, 17 min read