
Inside/Outside View

Last edit: Nov 16, 2021, 3:13 PM by Yoav Ravid

An Inside View on a topic involves making predictions based on your understanding of the details of the process. An Outside View involves ignoring those details and instead making an estimate based on a class of roughly similar previous cases (an approach also called reference class forecasting), though it has been pointed out that the term's meaning has since expanded beyond this.

For example, someone working on a project may estimate that they can reasonably get 20% of it done per day, so they will get it done in five days (inside view). Or they might consider that all of their previous projects were completed just before the deadline, so since the deadline for this project is in 30 days, that’s when it will get done (outside view).
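The two styles of estimate in this example can be written out as a toy calculation (a minimal sketch; the function names and numbers are illustrative, not drawn from any real project data):

```python
# Toy sketch of inside-view vs. outside-view project estimation.
# All numbers are hypothetical illustrations.


def inside_view_estimate(fraction_done_per_day: float) -> float:
    """Predict duration from a model of the process itself."""
    return 1.0 / fraction_done_per_day


def outside_view_estimate(past_margins: list[float], deadline_days: float) -> float:
    """Predict duration from a reference class of similar past projects.

    past_margins: for each past project, how many days before its
    deadline it was finished (0 = finished exactly at the deadline).
    """
    average_margin = sum(past_margins) / len(past_margins)
    return deadline_days - average_margin


# Inside view: "I can do 20% per day, so it takes five days."
print(inside_view_estimate(0.20))  # 5.0

# Outside view: "My past projects all finished right at the deadline,
# so this one will take roughly all 30 days."
print(outside_view_estimate([0, 0, 0], 30))  # 30.0
```

The point of the sketch is that the two functions ignore each other's inputs entirely: the inside view never consults the track record, and the outside view never consults the plan.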

The terms were originally developed by Daniel Kahneman and Amos Tversky. An early use is in Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking (Kahneman & Lovallo, 1993) and the terms were popularised in Thinking, Fast and Slow (Kahneman, 2011; relevant excerpt). The planning example is discussed in The Planning Fallacy.

Examples of the outside view

1. From Beware the Inside View, by Robin Hanson:

I did a 1500-piece jigsaw puzzle of fireworks, my first jigsaw in at least ten years. Several times I had the strong impression that I had carefully eliminated every possible place a piece could go, or every possible piece that could go in a place. I was very tempted to conclude that many pieces were missing, or that the box had extra pieces from another puzzle. This wasn’t impossible – the puzzle was an open box a relative had done before. And the alternative seemed humiliating.

But I allowed a very different part of my mind, using different considerations, to overrule this judgment; so many extra or missing pieces seemed unlikely. And in the end there was only one missing and no extra pieces. I recall a similar experience when I was learning to program. I would carefully check my program and find no errors, and then when my program wouldn’t run I was tempted to suspect compiler or hardware errors. Of course the problem was almost always my fault.

2. Japanese students expected to finish their essays an average of 10 days before deadline. The average completion time was actually 1 day before deadline. When asked when they’d completed similar previous tasks, the average reply was 1 day before deadline[1].

3. Students instructed to visualize how, where, and when they would perform their Christmas shopping, expected to finish shopping more than a week before Christmas. A control group asked when they expected their Christmas shopping to be finished, expected it to be done 4 days before Christmas. Both groups finished 3 days before Christmas[2].

Problems with the outside view

It is controversial how far the lesson of these experiments can be extended. Robin Hanson argues that they imply that forecasts in futurism should be made by finding a reference class of similar cases, rather than by trying to visualize outcomes. Eliezer Yudkowsky responds that this leads to “reference class tennis”, wherein each person feels that the same event ‘obviously’ belongs to a different reference class, and that the above experiments were performed in cases where the new example was highly similar to past examples: this year’s Christmas shopping optimism and last year’s Christmas shopping optimism are much more similar to one another than the invention of the Internet is to the invention of agriculture. If someone else then feels that the invention of the Internet belongs to the category ‘recent communications innovations’ and should be forecast by reference to television instead of agriculture, both sides pleading the outside view offers no resolution except “I’m taking my reference class and going home!”

More possible limitations of and problems with the outside view are discussed in The Outside View’s Domain and “Outside View!” as Conversation-Halter. Model Combination and Adjustment discusses the implications of the fact that there are usually multiple different outside views. Taboo “Outside View” argues that the meaning of “Outside View” has expanded too much, and that the term should be tabooed and replaced with more precise terminology. An alternative to “inside/outside view” has been proposed in Gears Level & Policy Level.
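The model-combination idea above, combining several outside views rather than fighting over which single reference class is right, can be sketched as a weighted average (all forecasts, class names, and weights below are made-up illustrations):

```python
# Combining several reference-class forecasts ("model combination")
# instead of picking a single winner. All values are hypothetical.


def combine_forecasts(forecasts: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of forecasts from different reference classes."""
    total_weight = sum(weights.values())
    return sum(forecasts[name] * weights[name] for name in forecasts) / total_weight


# Two reference classes give different forecasts for the same quantity;
# rather than playing "reference class tennis", weight each class by
# how relevant it seems and average.
forecasts = {"recent_communications_tech": 10.0, "general_purpose_tech": 40.0}
weights = {"recent_communications_tech": 0.7, "general_purpose_tech": 0.3}
print(combine_forecasts(forecasts, weights))  # ~19.0
```

The design choice here is that the disagreement moves from a binary fight over class membership to an explicit, adjustable weighting, which at least makes the judgment call visible.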


[1] Buehler, R., Griffin, D., & Ross, M. 2002. Inside the planning fallacy: The causes and consequences of optimistic time predictions. Heuristics and biases: The psychology of intuitive judgment, 250-270. Cambridge, UK: Cambridge University Press.

[2] Buehler, R., Griffin, D., & Ross, M. 1995. It’s about time: Optimistic predictions in work and love. In W. Stroebe & M. Hewstone (eds.), European Review of Social Psychology, Volume 6. Chichester: John Wiley & Sons.

Posts tagged Inside/Outside View

- Taboo “Outside View” (Daniel Kokotajlo, Jun 17, 2021)
- Confidence levels inside and outside an argument (Scott Alexander, Dec 16, 2010)
- “Outside View!” as Conversation-Halter (Eliezer Yudkowsky, Feb 24, 2010)
- What cognitive biases feel like from the inside (chaosmage, Jan 3, 2020)
- The Weak Inside View (Eliezer Yudkowsky, Nov 18, 2008)
- Model Combination and Adjustment (lukeprog, Jul 17, 2013)
- Multitudinous outside views (Davidmanheim, Aug 18, 2020)
- Hero Licensing (Eliezer Yudkowsky, Nov 21, 2017)
- The Outside View’s Domain (Eliezer Yudkowsky, Jun 21, 2008)
- Christiano, Cotra, and Yudkowsky on AI progress (Nov 25, 2021)
- Gears Level & Policy Level (abramdemski, Nov 24, 2017)
- Mistakes with Conservation of Expected Evidence (abramdemski, Jun 8, 2019)
- Be less scared of overconfidence (benkuhn, Nov 30, 2022)
- Corrigibility as outside view (TurnTrout, May 8, 2020)
- How I Formed My Own Views About AI Safety (Neel Nanda, Feb 27, 2022)
- Reference class of the unclassreferenceable (taw, Jan 8, 2010)
- Planning Fallacy (Eliezer Yudkowsky, Sep 17, 2007)
- The Outside View isn’t magic (Stuart_Armstrong, Sep 27, 2017)
- Outside View(s) and MIRI’s FAI Endgame (Wei Dai, Aug 28, 2013)
- Outside View as the Main Debiasing Technique (abramdemski, Oct 16, 2017)
- In defense of the outside view (cousin_it, Jan 15, 2010)
- Discussion: weighting inside view versus outside view on extinction events (Ilverin the Stupid and Offensive, Feb 25, 2016)
- Avoid misinterpreting your emotions (Kaj_Sotala, Feb 14, 2012)
- Toward a New Technical Explanation of Technical Explanation (abramdemski, Feb 16, 2018)
- The Problematic Third Person Perspective (abramdemski, Oct 5, 2017)
- Suspiciously balanced evidence (gjm, Feb 12, 2020)
- Navigating disagreement: How to keep your eye on the evidence (AnnaSalamon, Apr 24, 2010)
- Comment on SSC’s Review of Inadequate Equilibria (Ben Pace, Dec 1, 2017)
- Timeless Modesty? (abramdemski, Nov 24, 2017)
- Induction; or, the rules and etiquette of reference class tennis (paulfchristiano, Mar 3, 2013)
- [Question] What is the right phrase for “theoretical evidence”? (Adam Zerner, Nov 1, 2020)
- Taking the outside view on code quality (Adam Zerner, May 7, 2021)
- AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra (DanielFilan, May 28, 2021)
- Surface Analogies and Deep Causes (Eliezer Yudkowsky, Jun 22, 2008)
- Yudkowsky and Christiano discuss “Takeoff Speeds” (Eliezer Yudkowsky, Nov 22, 2021)
- Inside View, Outside View… And Opposing View (chaosmage, Dec 20, 2023)
- Concrete Advice for Forming Inside Views on AI Safety (Neel Nanda, Aug 17, 2022)
- On Investigating Conspiracy Theories (Zvi, Feb 20, 2023)
- [Question] Best arguments against the outside view that AGI won’t be a huge deal, thus we survive. (Noosphere89, Mar 27, 2023)
- Instrumental Rationality 2: Planning 101 (lifelonglearner, Oct 6, 2017)
- External rationality vs. internal rationality (metachirality, Aug 2, 2023)
- [Question] Accuracy of arguments that are seen as ridiculous and intuitively false but don’t have good counter-arguments (Christopher King, Apr 29, 2023)
- Are You Anosognosic? (Eliezer Yudkowsky, Jul 19, 2009)
- Placing Yourself as an Instance of a Class (abramdemski, Oct 3, 2017)
- What Makes My Attempt Special? (Andy_McKenzie, Sep 26, 2010)
- [Question] What is a reasonable outside view for the fate of social movements? (Bird Concept, Jan 4, 2019)
- Do we have a plan for the “first critical try” problem? (Christopher King, Apr 3, 2023)
- Kahneman’s Planning Anecdote (Eliezer Yudkowsky, Sep 17, 2007)
- Formalizing the “AI x-risk is unlikely because it is ridiculous” argument (Christopher King, May 3, 2023)
- But What’s Your *New Alignment Insight,* out of a Future-Textbook Paragraph? (David Udell, May 7, 2022)
- Against Modest Epistemology (Eliezer Yudkowsky, Nov 14, 2017)
- An Outside View on Less Wrong’s Advice (Mass_Driver, Jul 7, 2011)
- Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted (David Chee, Jun 12, 2023)
- Trusting Expert Consensus (ChrisHallquist, Oct 16, 2013)
- How To Be More Confident… That You’re Wrong (Wei Dai, May 22, 2011)