
Anchoring and Adjustment

Eliezer YudkowskySep 7, 2007, 9:33 PM
84 points
22 comments · 2 min read
Tags: Priming, Anchoring, Heuristics & Biases

Suppose I spin a Wheel of Fortune device as you watch, and it comes up pointing to 65. Then I ask: Do you think the percentage of countries in the United Nations that are in Africa is above or below this number? What do you think is the percentage of UN countries that are in Africa? Take a moment to consider these two questions yourself, if you like, and please don’t Google.

Also, try to guess, within five seconds, the value of the following arithmetical expression. Five seconds. Ready? Set . . . Go!

1 × 2 × 3 × 4 × 5 × 6 × 7 × 8

Tversky and Kahneman recorded the estimates of subjects who saw the Wheel of Fortune showing various numbers.1 The median estimate of subjects who saw the wheel show 65 was 45%; the median estimate of subjects who saw 10 was 25%.

The current theory for this and similar experiments is that subjects take the initial, uninformative number as their starting point or anchor; and then they adjust upward or downward from their starting estimate until they reach an answer that “sounds plausible”; and then they stop adjusting. This typically results in under-adjustment from the anchor—more distant numbers could also be “plausible,” but one stops at the first satisfying-sounding answer.
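The adjust-until-plausible account can be sketched as a toy model. The "plausible range" below is an invented parameter, not something from the paper; it is chosen so that stopping at the nearest edge of the range happens to reproduce the reported medians:

```python
def anchored_estimate(anchor, plausible_low, plausible_high, step=1):
    """Start at the anchor and adjust one step at a time, stopping at
    the first value that falls inside the plausible range."""
    estimate = anchor
    while estimate > plausible_high:  # anchored too high: adjust downward
        estimate -= step
    while estimate < plausible_low:   # anchored too low: adjust upward
        estimate += step
    return estimate

# Invented plausible range of 25%-45% for "UN countries in Africa":
print(anchored_estimate(65, 25, 45))  # 45 (the reported median for the 65 anchor)
print(anchored_estimate(10, 25, 45))  # 25 (the reported median for the 10 anchor)
```

The point of the sketch is that the stopping rule, not the starting point, does all the work: any anchor outside the plausible band lands exactly on the band's nearest edge.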

Similarly, students shown “1 × 2 × 3 × 4 × 5 × 6 × 7 × 8” made a median estimate of 512, while students shown “8 × 7 × 6 × 5 × 4 × 3 × 2 × 1” made a median estimate of 2,250. The motivating hypothesis was that students would try to multiply (or guess-combine) the first few factors of the product, then adjust upward. In both cases the adjustments were insufficient, relative to the true value of 40,320; but the first set of guesses was much more insufficient because it started from a lower anchor.
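Both orderings are, of course, the same product. As a quick check of the arithmetic in this passage (the "off by a factor" figures are simply the ratio of the true value to each reported median):

```python
import math

ascending = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8
descending = 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1
assert ascending == descending == math.factorial(8) == 40320

# How far off the reported median estimates were from the true value:
print(40320 / 512)   # ascending-order anchor: off by a factor of 78.75
print(40320 / 2250)  # descending-order anchor: off by a factor of 17.92
```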

Tversky and Kahneman report that offering payoffs for accuracy did not reduce the anchoring effect.

Strack and Mussweiler asked for the year Einstein first visited the United States.2 Completely implausible anchors, such as 1215 or 1992, produced anchoring effects just as large as more plausible anchors such as 1905 or 1939.

There are obvious applications in, say, salary negotiations, or buying a car. I won’t suggest that you exploit it, but watch out for exploiters.

And watch yourself thinking, and try to notice when you are adjusting a figure in search of an estimate.

Debiasing manipulations for anchoring have generally proved not very effective. I would suggest these two: First, if the initial guess sounds implausible, try to throw it away entirely and come up with a new estimate, rather than sliding from the anchor. But this in itself may not be sufficient—subjects instructed to avoid anchoring still seem to do so.3 So, second, even if you are trying the first method, try also to think of an anchor in the opposite direction—an anchor that is clearly too small or too large, instead of too large or too small—and dwell on it briefly.

1Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1131.

2Fritz Strack and Thomas Mussweiler, “Explaining the Enigmatic Anchoring Effect: Mechanisms of Selective Accessibility,” Journal of Personality and Social Psychology 73, no. 3 (1997): 437–446.

3George A. Quattrone et al., “Explorations in Anchoring: The Effects of Prior Range, Anchor Extremity, and Suggestive Hints” (Unpublished manuscript, Stanford University, 1981).

Part of the sequence: Seeing with Fresh Eyes. Next: Priming and Contamination.
  • Andrew2Sep 8, 2007, 12:02 AM
    6 points

    When I do this demo in class (see here for details or here for the brief version), I phrase it as “the percentage of countries in the United Nations that are in Africa.” This seems less ambiguous than Kahneman and Tversky’s phrasing (although, I admit, I haven’t done any experiment to check). It indeed works in the classroom setting, although with smaller effects than reported by Kahneman and Tversky (see page 89 of the linked article above).

    • [deleted]Jun 26, 2011, 11:25 PM
      2 points

      That book is indeed a great one, and I have used many ideas from it in teaching an undergraduate probability class myself. I’m a grad student in applied math, so I may not see you at many of the same conferences; LW appears to be as good a place as any to say thanks. The Bayesian Data Analysis book is also quite good.

  • Brett_A._ThomasSep 8, 2007, 12:30 AM
    9 points

    By the way, I’m very tired, so this might just be my misreading, but I found the UN question to be ambiguous—“Do you think the percentage of African countries in the UN is above or below [65%]?” I read that as, “Of all the countries in Africa, what percentage of them are in the UN?”, not as what I believe to be the intended “Of all the countries that are in the UN, how many of them are African?” The answer to the former can quite obviously be guessed as “100% or darn close”, but the answer to the latter is less obvious.

  • Philip_HuntSep 8, 2007, 3:32 AM
    −1 points

    Brett: “I found the UN question to be ambiguous—‘Do you think the percentage of African countries in the UN is above or below [65%]?’ I read that as, ‘Of all the countries in Africa, what percentage of them are in the UN?’, not as what I believe to be the intended ‘Of all the countries that are in the UN, how many of them are African?’”

    I don’t think it’s ambiguous at all. The question, as worded, clearly means “Of all the countries in Africa, what percentage of them are in the UN?” And equally clearly, that’s not what the questioner intended.

  • Unnamed2Sep 8, 2007, 3:58 AM
    51 points

    You’re a few years behind on this research, Eliezer.

    The point of the research program of Mussweiler and Strack is that anchoring effects can occur without any adjustment. “Selective Accessibility” is their alternative, adjustment-free process that can produce estimates that are too close to the anchor. The idea is that, when people are testing the anchor value, they bring to mind information that is consistent with the correct answer being close to the anchor value, since that information is especially relevant for answering the comparative question. Then when they are then asked for their own estimate, they rely on that biased set of information that is already accessible in their mind, which produces estimates that are biased towards the anchor.

    In 2001, Epley and Gilovich published their first of several papers designed to show that, while the Selective Accessibility process occurs and creates adjustment-free anchoring effects, there are also cases where people do adjust from an anchor value, just as Kahneman & Tversky claimed. The examples that they’ve used in their research are trivia questions like “What is the boiling point of water on Mount Everest?” where subjects will quickly think of a relevant, but wrong, number on their own, and they’ll adjust from there based on their knowledge of why the number is wrong. In this case, most subjects know that 212F is the boiling point of water at sea level, but water boils at lower temperatures at altitude, so they adjust downward. This anchoring & adjustment process also creates estimates that are biased towards the anchor, since people tend to stop adjusting too soon, once they’ve reached a plausible-seeming value.

    Gilovich and Epley have shown that subjects give estimates farther from the anchor (meaning that they are adjusting more) on these types of questions when they are given incentives for accuracy, warned about the biasing effect of anchors, high in Need For Cognition (the dispositional tendency to think things through a lot), or shaking their head (which makes them less willing to stop at a plausible-seeming value; head-nodding produces even less adjustment than baseline). None of these variables matter on the two-part questions with an experimenter provided anchor, like the Africa UN %, where selective accessibility seems to be the process creating anchoring effects. The relevance of these variables is the main evidence for their claim that adjustment occurs with one type of anchoring procedure but not the other.

    The one manipulation that has shown some promise at debiasing Selective Accessibility based anchoring effects is a version of the “consider the opposite” advice that Eliezer gives. Mussweiler, Strack & Pfeiffer (2000) argued that this strategy helps make a more representative set of information accessible in subjects’ minds, and they did find debiasing when they gave subjects targeted, question-specific instructions on what else to consider. But they did not try teaching subjects the general “consider the opposite” strategy and seeing if they could successfully apply it to the particular case on their own.

    Mussweiler and Gilovich both have all of their relevant papers available for free on their websites.


    Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391–396.

    Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26, 1142–1150.

    • Swimmer963 (Miranda Dixon-Luinenburg) Aug 28, 2012, 6:35 PM
      1 point

      Gilovich and Epley have shown that subjects give estimates farther from the anchor (meaning that they are adjusting more) on these types of questions when they are given incentives for accuracy, warned about the biasing effect of anchors, high in Need For Cognition (the dispositional tendency to think things through a lot), or shaking their head (which makes them less willing to stop at a plausible-seeming value; head-nodding produces even less adjustment than baseline).

      Shaking their heads? If this is really an effective way to de-bias your thinking a tiny bit...COOL! I will try that!

      • tlhonmeyMay 12, 2022, 5:02 PM
        1 point

        There have actually been several studies I’ve seen indicating that body-language is a feedback loop rather than just a communication output. Forcing yourself to smile will actually make you slightly happier, etc.

  • Eliezer YudkowskySep 8, 2007, 9:25 AM
    8 points

    Unnamed, thank you for correcting me.

  • Robin_Hanson2Sep 8, 2007, 11:25 AM
    6 points

    Unnamed, would you perhaps consider becoming a contributor here?

  • Henry_VSep 8, 2007, 1:14 PM
    4 points

    Can some of the anchoring effect be explained by the use of a kind of implicit confidence interval?

    Suppose that I (subconsciously) have an estimate of 20% for the proportion of UN countries that are African. Further suppose that I think a 95% confidence interval ranges from 10% to 30%.

    If I start at a high anchor, I will adjust downwards until I’m within the 95% CI, i.e., 30%. If I start at a low anchor, I adjust upwards until I’m within the 95% CI, i.e., 10%. In my head, I may consider 10% and 30% as not statistically different from one another.

    I’m not talking about exact statistical inference, but I wonder if this process is part of what’s going on in the subject’s head.

    I have tried a classroom bargaining experiment, where I give random “valuations” to students. I then assign random ownership (so that half the class become sellers). Without knowing what the item is (it’s just “some good”), the initial offerers tend to have a disadvantage because they use their own valuations as anchors.

    When I change the setup by telling them that “it’s a used Toyota,” the final bargained prices tend to more closely (but not perfectly) split the surplus.

    I’m reminded of a story that my father tells about being in the army and learning to shoot. After missing the target, the instructor told them to use “bold sight adjustments” because shooters tend to be too timid in adjusting their aims. The phrase “bold sight adjustments” became part of our family vocabulary.

  • Gavin2Sep 10, 2007, 3:42 PM
    4 points

    I like to avoid looking at the prices of things that I want to buy, and instead ask myself “how much would I be willing to pay for this?” It’s my way of overcoming anchoring bias, and works pretty well.

  • TerraApr 12, 2008, 2:01 PM
    −1 points

    I wrote a post looking, from a computer science perspective, at the two numbers from this experiment: “1 × 2 × 3 × 4 × 5 × 6 × 7 × 8” produced a median estimate of 512, and “8 × 7 × 6 × 5 × 4 × 3 × 2 × 1” produced a median estimate of 2,250.

    One of the interesting things I noticed was that both median guesses are close to powers of two, and that with a little bit of fudging you can make a pretty good guess about how the brain creates that estimate.

    I.e.,

    2^8 × 2^1 = 512 and 2^8 × 2^3 = 2048

    (Note that 2^3 is 8, but 2^1 is 2 instead of 1; so if you fudge the numbers to their closest powers of two and then multiply by adding exponents, you get the answers they created.)

    So for

    4 × 3 × 2 × 1 you would get 2^4 × 2^2, which is 64, and for 1 × 2 × 3 × 4 you would get 2^4 × 2^1, which is 32
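Terra's sums of powers of two read most naturally as products, i.e. adding exponents. The decomposition below — guess ≈ 2^(number of factors) × (first factor, fudged to a power of two) — is an interpretation of the comment, not data from the experiment:

```python
# Reading the comment's "+" as multiplying powers of two (adding exponents):
# guess ~ 2^(number of factors) * (first factor, fudged to a power of two,
# with 1 fudged up to 2^1). This decomposition is an interpretation, not data.
assert 2**8 * 2**1 == 512    # "1 x 2 x ... x 8": the 512 median
assert 2**8 * 2**3 == 2048   # "8 x 7 x ... x 1": close to the 2,250 median
assert 2**4 * 2**2 == 64     # the comment's prediction for "4 x 3 x 2 x 1"
assert 2**4 * 2**1 == 32     # the comment's prediction for "1 x 2 x 3 x 4"
```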

  • TerraApr 12, 2008, 2:05 PM
    0 points

    forgot the link (although you can click on the name, that might not be obvious)

    http://www.functionalforums.com/TreeForum/index/Functional-Forums/Implementing-bias-in-AI

  • pookleblinkyJun 10, 2008, 6:12 PM
    1 point

    Let’s make the debiasing technique more rigorous.

    How much more unlikely is it that I will throw 15 consecutive snake-eyes than that I will throw 11 consecutive snake-eyes?

    I should allocate about −170 dB of belief to the likelihood of throwing 11 snake-eyes, and about −232 dB to the likelihood I will throw 15 snake-eyes. The ~60 dB difference indicates the latter event is 6 orders of magnitude more unlikely.

    What does it mean if someone thinks the difference is smaller?

    If 6 orders of magnitude of improbability are glossed over, that means the person does not comprehend it in gut terms.

    To what other event might I allocate −60 dB? How about flipping a coin 20 times and getting all Heads?

    Now we’re getting somewhere. Let us ask ourselves a series of restricted Aumann Questions (on various statements in general knowledge) and calculate our joint belief. The difference between the belief we allocated, and the belief we ought to have allocated, is a measure of our flattened sense of improbability. We can take this into account, and adjust our anchors accordingly. We can, in effect, see how finely-tuned is our sense of improbability.

    I.e., suppose I take a restricted Aumann test of 40 questions regarding various general facts. I assign a joint probability of −150 dB to the survey. If I were better calibrated, my priors ought to have increased this to −100. I now know I must be aware of a possible 50 dB gap between my beliefs and reality, and I ought to be wary of any parochial adjustment. How wary? I should attach very little confidence to any adjustment under one order of magnitude...
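pookleblinky's figures check out under the standard 10·log10(p) decibel convention; a minimal sketch:

```python
import math

def decibels(p):
    """Belief in decibels: 10 * log10(probability)."""
    return 10 * math.log10(p)

p = 1 / 36  # probability of snake-eyes on one throw of two dice

print(round(decibels(p ** 11)))    # -171, the comment's "about -170 dB"
print(round(decibels(p ** 15)))    # -233, the comment's "about -232 dB"
print(round(decibels(0.5 ** 20)))  # -60: twenty heads in a row

# The gap between 15 and 11 consecutive snake-eyes, in orders of magnitude:
print(round((decibels(p ** 15) - decibels(p ** 11)) / 10))  # -6
```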

    • pnrjuliusApr 3, 2012, 1:18 AM
      0 points

      I ran that one in my head and thought, “that’s got to be about a million times less likely.” And indeed it was, 6 orders of magnitude. To some extent, I may just have gotten lucky… but I think that lurking on Less Wrong for the last couple years may have made me appreciate probabilities at a more intuitive level.

      So does this mean Less Wrong actually works?

  • Neel_NandaApr 24, 2012, 6:14 AM
    2 points

    But how would you slip an anchor in a normal conversation? Does it have to be phrased as a possible question or can it just be a random number they see or hear?

    • EricfJul 7, 2022, 9:23 PM
      1 point

      It can just be a random number, as long as it registers as a number and not, say, a telephone dialing pattern or a PIN. But it can’t be a number with relevant context.

      So if you’re selling a used car, mention big numbers without meaningful context, like “they made 123,456 of this model year.” But if you mention the mileage, that has a “slot” in the buyer’s brain, and won’t be used as an anchor for the price.

  • mfbAug 4, 2012, 9:55 PM
    1 point

    What about many artificial anchors?

    Make a list with powers of 1.2 from 1 to 10. Look at it to estimate some absolute number, assuming you can somehow estimate the correct order of magnitude.
    In a similar way, for probabilities, make a list from 0 to 1 with a logarithmic scale of ratios in some interesting range.

    It does not help for the year Einstein first visited America, but I would really use anchors for that: 1933 as an upper limit, 1880 as a lower limit, and the remaining timespan would be guesswork for me.
    Looking at a biography, I think the answer is 1964+34-185+4*27 (to reduce the spoiler impact :p)
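The anchor ladder mfb describes can be generated mechanically. The two-decimal rounding and the one-order-of-magnitude span are choices of this sketch; the ladder would then be scaled by the estimated order of magnitude:

```python
# Powers of 1.2 spanning one order of magnitude, as a ladder of anchors.
anchors = []
x = 1.0
while x <= 10:
    anchors.append(round(x, 2))
    x *= 1.2

print(anchors)
# [1.0, 1.2, 1.44, 1.73, 2.07, 2.49, 2.99, 3.58, 4.3, 5.16, 6.19, 7.43, 8.92]
```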

  • mszegedyApr 8, 2014, 6:37 AM
    2 points

    I’ve found that going by significant digits helps.

    “If I represented the date that Einstein came to the US with only one significant digit of precision, what would it be? Definitely 2000. What about two? Definitely 1900. What about three? Probably 1900 again; I’m willing to take that bet. But four digits of precision? I’m not sure at all. I’ll leave it as 1900.”

    The answer came out way off, but hopefully it prevented any anchoring, and it also accurately represents my knowledge of Einstein (namely, I know which properties of physics he discovered, and I know that he wrote his most important papers in the earlier half of the 190Xs, which must have also been when he came to the US). In hindsight, I should have taken historical context into account (why would Einstein leave for the US in the first place? if I had considered this, my guess would probably have ended up as 1910 or 1920), but that’s hindsight bias, or a lesson to be learned.

    An improvement to this method might be that I explicitly consider the range of numbers that would make it come out as a significant digit (if the three-significant-digit number is 1900, then he came between 1895 and 1904; does that sound more plausible than him coming sometime between 1905 and 1914?). But this might just make the anchoring effect worse, or introduce some other bias.
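mszegedy's significant-digits drill can be captured in a small helper. The helper and the example value 1933 are illustrative choices here, not from the comment:

```python
import math

def round_sig(x, n):
    """Round x to n significant digits."""
    exponent = math.floor(math.log10(abs(x)))
    return round(x, -(exponent - (n - 1)))

# Walking up in precision, with 1933 as an illustrative year:
print(round_sig(1933, 1))  # 2000
print(round_sig(1933, 2))  # 1900
print(round_sig(1933, 3))  # 1930
```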

  • DimitriKNov 13, 2014, 4:42 PM
    0 points

    On the question of Einstein I anchored, but I don’t see how else I could have done it. I don’t know much about his personal history, but I get the sense Einstein made some contributions to the atom bomb, and had fled Europe to escape Nazi persecution. I anchored on 1945 as the end of WW2 and figured he must have left a fair bit sooner, possibly before the war, as Nazi persecution had already started before the war was underway.

    I guessed 1937. I can’t see how else I could have gone about it with the limited information I had. If I can’t google for the question I have to go with what’s a familiar piece of information and adjust from there.

    I looked it up after, and he was visiting the US in ’33 and decided not to go back to Germany when Hitler came to power. I wasn’t correct, but anchoring let me make a reasonably good guess when I was dealing with a lack of information.

  • LucentNov 26, 2019, 1:39 AM
    2 points

    Is there general agreement that anchoring experiments are a subversion of an evolutionary trait that is generally beneficial? It’s rare to be in a group, be presented with a “random” number, and then be asked a question whose answer will be an unrelated number. Unless you have a lot of group status, it’s much less harmful to your standing to be wrong with many others frequently than it is beneficial to be right alone infrequently. It’s only recent in our evolutionary history that the balance has tipped in the other direction.

  • tmercerJul 7, 2022, 9:10 PM
    1 point

    I guessed 55,000 for the fast multiplication after this 65 anchor. I think the percentage of UN countries in Africa is <65%.
