We Change Our Minds Less Often Than We Think
Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another. The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%.
—Dale Griffin and Amos Tversky1
When I first read the words above—on August 1st, 2003, at around 3 o’clock in the afternoon—it changed the way I thought. I realized that once I could guess what my answer would be—once I could assign a higher probability to deciding one way than the other—then I had, in all probability, already decided. We change our minds less often than we think. And most of the time we become able to guess what our answer will be within half a second of hearing the question.
How swiftly that unnoticed moment passes, when we can’t yet guess what our answer will be; the tiny window of opportunity for intelligence to act. In questions of choice, as in questions of fact.
The principle of the bottom line is that only the actual causes of your beliefs determine your effectiveness as a rationalist. Once your belief is fixed, no amount of argument will alter the truth-value; once your decision is fixed, no amount of argument will alter the consequences.
You might think that you could arrive at a belief, or a decision, by non-rational means, and then try to justify it, and if you found you couldn’t justify it, reject it.
But we change our minds less often—much less often—than we think.
I’m sure that you can think of at least one occasion in your life when you’ve changed your mind. We all can. How about all the occasions in your life when you didn’t change your mind? Are they as available, in your heuristic estimate of your competence?
Between hindsight bias, fake causality, positive bias, anchoring/priming, et cetera, et cetera, and above all the dreaded confirmation bias, once an idea gets into your head, it’s probably going to stay there.
1Dale Griffin and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology 24, no. 3 (1992): 411–435.
I hate changing my mind based on my parents’ advice because I want to demonstrate that I’m capable of making good decisions on my own, especially since we seem to disagree on some fundamental values. Specifically, they love their jobs and put a moral value on productivity, while my goal in life is to “work” as little as possible and have as much “fun” as possible.
Eliezer never said to change your mind based on wrong advice! However, if you feel as if you should be following your parents’ advice, perhaps you should question exactly how capable you really are (at the moment).
Does this mean that if we cannot remember ever changing our minds, our minds are very good at removing clutter?
Or, consider a question that you’ve not made up your mind on: Does this mean that you’re most likely to never make up your mind?
And, anyway, in light of those earlier posts concerning how well people estimate numeric probabilities, should it be any wonder that 66% = 96%?
Don’t they normally make them more certain? Like, if they’re 96% sure, there’s a 66% chance that they’re right, rather than the other way around?
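A minimal arithmetic sketch of the figures in the quoted study may help here (only the aggregate numbers from the quote are used; nothing else is from the source). The predictions came true far more often than the stated probabilities implied, which is why the result is read as underconfidence rather than overconfidence:

```python
# A minimal sketch of the Griffin & Tversky numbers quoted at the top of the post.
# Only the aggregate figures from the quote are used here.

n_respondents = 24
n_chose_predicted_option = 23        # "only 1 of the 24 respondents" reversed their lean
mean_stated_confidence = 0.66        # average probability assigned to the predicted choice

observed_accuracy = n_chose_predicted_option / n_respondents   # ~0.958, i.e. ~96%

# Well-calibrated forecasters would be right about as often as their stated confidence.
# Here accuracy far exceeds confidence, so the error runs toward underconfidence.
print(f"stated confidence ~{mean_stated_confidence:.0%}, observed accuracy ~{observed_accuracy:.0%}")
```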
Not to argue, but to point out that this is not necessarily a bad thing. It depends entirely on the basis of one’s conclusion. Gut instincts are quite often correct about things we have no conscious evidence for—because our unconscious does have pretty good evidence filters. Which is one of the reasons I suggested rationalization is not necessarily a bad thing, as it can be used to construct a possible rational basis for conceptualizations developed without conscious thought, thus permitting us to judge the merit of those ideas.
Here is one way to change your mind. Think through something carefully, relying on strong connections. You may at some point walk right into a conclusion that contradicts a previous opinion. At this point something will give. The strength of this method is that it is strengthened by the very attachment to your ideas that it undermines. The more stubborn you are, the harder you push against your own stubbornness.
I agree with Adirian that not changing our minds is not necessarily a bad thing.
The problem, I guess, like with most things is we can’t be sure which way to go. Gut feelings are often quite correct. But how do we know when we are having a bias which is not good for us and when it’s a gut feeling? Gut feelings inherently aren’t questionable. Biases need to be kept in check.
If we run through the standard biases and logical fallacies like a checklist and what we think doesn’t fall into any of them, we can go with our gut instinct. Else, give whatever we have in mind a second thought. What we do may not be foolproof, but it at least takes us in a direction which would make changing our minds, when required, a less painful process.
It probably doesn’t help to live in a society where changing one’s positions in response to evidence is considered “waffling”, and is considered to show a lack of conviction.
Divorce is a lot more common than 4%, so people do admit mistakes when given enough evidence.
I think the embargo on mind-changing is a special case for politicians: after all, if they say one thing on the hustings, and then do another in office, that makes a mockery of democracy. However, if it is applied to non-politicians, that would be fallacious.
If they say one thing and intend to do another, sure—but if they actually update? That may be bad PR, but I don’t think it’s undemocratic.
If you can’t rely on politicians to do something like what they said they were going to, what’s the point in voting? Ideally, a politician who has a change of heart should stand for re-election.
You could have a prediction about what they respectively will do and have a preference over those outcomes.
So if they ruin the economy, and I successfully predict that, I smile and collect my winnings?
Both candidates being likely to successfully manage to ruin the economy is a problem quite distinct from politicians lying.
Presumably if you can predict that Candidate A will ruin the economy, then you vote for Candidate B instead.
Unless you can think of a way of winning by having advance knowledge that the economy will be ruined, which will net you greater gain than having an un-ruined economy would be. Then you may selfishly vote for Candidate A.
I’m ignoring here the question of how much your opinion influences the outcome of the election, of course. Also if you end up predicting that all the candidates will ruin the economy equally, you don’t have much of a decision to make.
I can only predict what will happen on the basis that a) their policies will have a certain effect and b) they will actually implement their policies. Which gets back to the original point: if they are not going to do what they say, what is the point of voting?
That seems to be a significant limitation.
Fortunately, not everybody has said limitation.
I think I agree. I also think wedrifid wanted to talk about predictions of what the candidates do, even if they are not guaranteed not to change their mind.
This doesn’t seem impossible, just harder. You’d have to make a guess as to how likely the candidates are to implement a different policy from the one they promised, as well as the effect the possible policies will have.
The candidates do have an incentive to signal that they are unlikely to “waffle”. If you are relatively certain to implement your policies, then at least those who agree with you will predict that you’ll have a good effect. If you look like you might change your mind, even your supporters might decide to take a different option, because who knows what you will do?
In theory, you might gain a bigger advantage by somehow signaling that you will change your mind for good reasons. Then if new information comes up in the future, you’re a better choice than anyone who promises not to change their mind at all. But this is trickier and less convincing.
You misrepresent democracy very badly in the above post. Politicians are not agents of the voters; they are representatives of them, appointed by, and accountable to, the demos, but not a mirror of it. They are not supposed to enact the policies voters thought appropriate two years ago at the polls, or what polls well today. They are supposed to do what the voters would want done if they had time to research the issue and give it some thought, incorporating all data about the present situation. If policy were supposed to reflect the averaged will of the people, politicians would be entirely redundant and we could just do lawmaking by popular initiative.
Of course it is unworkable for politicians to stick rigidly to their manifestos. It is also unworkable for them to discard their manifestos on day one.
On the other hand, of the people I know who have gotten divorced, refusal to admit mistakes seems to be one of the leading causes...
Changing your mind or “updating” is not necessarily a sign of rationality. You could also update for wrong reasons.
For example, a divorce can happen when a person has unrealistic expectations on marriage. Updating their beliefs about their partner would be just a side effect of refusing to update their beliefs about marriage.
Also, in some cases, the divorce could have been planned since the beginning (for example for financial gain), so it actually did not include a change of mind.
I wonder if the act of answering the question actually causes the decision to firm up. Kind of the OvercomingBias Uncertainty Principle.
It is nice to have a clear example of where people are consistently underconfident. Are there others? Michael, good point about divorce.
In my experience teenagers are often underconfident about their parents’ decision making ability… and indeed overconfident about their own decision making ability.
Many women seem to be underconfident about their driving skills—the consequence of this is that they have fewer accidents! Though maybe, being a male, I’m overconfident and hence judge their appropriate confidence as underconfidence.
http://ngm.nationalgeographic.com/print/2011/10/teenage-brains/dobbs-text (emphasis added)
EDIT: “What’s Wrong With the Teenage Mind?”, WSJ: http://online.wsj.com/article/SB10001424052970203806504577181351486558984.html
EDITEDIT: http://pss.sagepub.com/content/19/7/650.short
An interesting article, gwern, thanks.
It provides a rationale in support of my statement that teenagers are often overconfident in their decision making ability. The article argues that teenagers’ reward perception is higher and hence the risk seems reasonable compared to the “high” reward; that is one form of overconfidence.
It certainly is interesting to see that adults aren’t affected by having a peer present during the experiment, whereas teenagers are more likely to engage in risk taking—perhaps better called reward seeking—when a peer is watching.
But I posted that to say exactly the opposite.
When deciding whether to take a risk, the risk and reward are the two most important factors. It’s easy to criticize someone for failing to accurately judge the risk: you simply collect highly accurate statistics on the risk, and compare with what the person says when you ask. But as the article says, when we do this, the teenagers are not discounting risks, but exaggerating them.
So that leaves only the reward. How do you prove someone is wrong about how much they will enjoy the reward for taking the risk? It seems rather hard to do, and you definitely haven’t done it… The peer thing doesn’t prove anything about them being wrong: gaining a reputation for bravery or risk-taking or gratifying your peers and other social factors are present simultaneously. (If you have ever shown off for friends, or watched someone show off, you will understand the rewards are substantial.)
The teenager in the article didn’t exaggerate the risks when driving at 113 mph; he didn’t even consider the risk of getting caught. Is a trip to court, lawyer’s fees, several fines, and risking death worth the thrill of driving 113 mph? You tell me.
One might also consider the reality of self-serving bias. The teenager paints himself in the best kind of light with regards to safety. He gets caught doing 113 mph and is peeved he’s been charged with “reckless driving”. NO, he says, I wasn’t reckless, I wasn’t just gunning it, I was driving; the road is dry and straight, it was daytime—all these comments of his are designed to make him sound as if he isn’t reckless. Yet the expert, the police officer, charges him with reckless driving. Does the police officer have it wrong? Is driving 113 mph on a public road reckless? The article does support my observation about teenagers. That particular teenager is overconfident in his ability to decide what is reckless.
So you say.
You exaggerate. That’s only if you are caught and worst-case scenarios if you are caught to boot. Is it worth it? Ask any skydiver; I’ve gone skydiving, and it is amazing. And I’m not even a teenager any more.
(This sounds like the usual generalizing problem: “I don’t think that sounds insanely fun and awesome, so obviously no teenager can find it that rewarding and by the previous logic, these teenagers must be making extremely biased assessments of risk; they should stop that. Also, these teens should just stop laying in bed all morning and staying up all night.” You are a respectable sober adult, I should not be surprised to learn.)
Yes, I’m sure the scientists conducting these risk-assessment surveys are so moronic that they only asked teenagers immediately after taking a risk and getting burned, and never even once thought about cognitive dissonance or other such issues.
“So you say.” Nope, so the author of the article reveals by relaying what the driver said (and implied) about risks. Further it’s obvious the teenage driver didn’t drive the route before his speed run, or he’d have likely seen the police officer who busted him and not have done the speed run at that time, that probable lack of a pre-drive increased his risk of accident and made certain he got busted that particular time.
“You exaggerate. That’s only if you are caught and worst-case scenarios if you are caught to boot. Is it worth it? Ask any skydiver; I’ve gone skydiving, and it is amazing. And I’m not even a teenager any more. (This sounds like the usual generalizing problem; “I don’t think that sounds insanely fun and awesome, so obviously no teenager can find it that rewarding. Also, these teens should just stop laying in bed all morning and staying up all night.” You are a respectable sober adult, I should not be surprised to learn.)”
First, it wasn’t an exaggeration; it was the facts as revealed by the article you posted! Second, the worst case scenario I can think of isn’t killing himself, it’s driving at 113 mph into a school bus and killing and maiming 40 children, then surviving and being paralysed from the neck down for the remainder of his life, which is spent in a prison hospital. Third, one third of American teenage deaths are in motor vehicles. http://www.cdc.gov/nchs/data/databriefs/db37.htm
With regards to your skydiving example, how about you be a good chap and link the appropriate lesswrong description for ludicrously weak analogy.
As for your inclination to dismiss me due to your (somewhat inaccurate) stereotyping: like, seriously mate, please put a sock in it. I came to lesswrong hoping to get away from that kind of immature nonsense.
I don’t believe there is one. Also, it’s on the strong side as analogies go—it’s a risky behavior that is a lot of fun, specifically from going very fast. What would be a better analogy, going fast in speedboats?
Maybe I’m misreading, but it looks to me like a little over 35%. That said, I don’t see how it’s relevant. If one teenager died every 20 years and 1⁄3 of them were in motor vehicles, would that imply anything? How does the 1⁄3 of deaths relate to anything about proper analysis of risks, and is anything similar implied by the 13% that die from homicide? Should teens stop going to places where there are other humans, even though it’s enjoyable, because someone there might kill them?
So your interpretation of the anecdote as presented by the author overrides the stated summary of the surveys by an involved academic?
Your precautions do not eliminate the risk (do police officers not move?), and further, they are non sequiturs: listing possible precautions does not prove or disprove anything about teenagers’ risk perceptions and received rewards, neither their elevation nor their reduction.
What Thom said. Sumner has a good maxim, ‘never reason from a price change’, that applies here as well. Prices have at least two factors, demand and supply, which interact to give the price—but two factors means you can’t reason backwards from the price (or its change) to infer how or whether either the demand or supply changed. We are dealing with an equation with even more variables than a simple supply-demand graph, of which the death-rate is only one and already addressed by the risk underestimation. It is not very useful to learn of a rate with no context or information on what the best rate is. (Another economist said something to the effect that, I don’t know what the best number of falling buildings in an earthquake is but it probably is non-zero. Similar observations are true of risk-taking in general.)
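A toy sketch of the “never reason from a price change” maxim, with made-up linear numbers (the model and figures are illustrative assumptions, not from the source): two different underlying shifts produce the same observed price, so the price alone cannot identify which factor moved.

```python
# A minimal toy model; the linear form and numbers are illustrative assumptions.

def equilibrium_price(demand_shift, supply_shift, base_price=10.0):
    # Toy rule: stronger demand pushes the price up, stronger supply pushes it down.
    return base_price + demand_shift - supply_shift

price_if_demand_rose = equilibrium_price(demand_shift=4.0, supply_shift=0.0)   # 14.0
price_if_supply_fell = equilibrium_price(demand_shift=0.0, supply_shift=-4.0)  # 14.0

# Same observed price, two different causes: the change alone can't tell you
# whether demand or supply moved, just as a death rate alone can't tell you
# whether risk estimates or reward valuations changed.
print(price_if_demand_rose, price_if_supply_fell)
```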
Your failure to deal at all seriously with the idea (that teens do derive large amounts of utility from the risky activities and this justifies them) isn’t very appropriate for LW. I did not stereotype, I drew the logical conclusion from an age-related neurobiological change combined with a lack of empathy that the community has frequently noticed, and I did so in a deliberately non-insulting way.
(Had I intended to be immature, I would have gone with something like ‘coward’ or ‘age-dulled senses’ as descriptions of older non-teens’ reduced enjoyment of the risky behavior under discussion.)
Gwern wrote “(This sounds like the usual generalizing problem: “I don’t think that sounds insanely fun and awesome, so obviously no teenager can find it that rewarding and by the previous logic, these teenagers must be making extremely biased assessments of risk; they should stop that. Also, these teens should just stop laying in bed all morning and staying up all night.” You are a respectable sober adult, I should not be surprised to learn.)”
Gwern, you stated the above ad hominem, which I find insulting, regardless of whether you meant it as such. You implied that I was thinking such things—both the words AND the method aren’t conducive to civil discussion, hence I responded with less civility than I would have preferred. You attacked my character as a means of dismissing my discussion, it’s a low tactic and one I didn’t expect from LW, it is indeed the typical internet trash talk I mentioned.
My interpretation of the anecdote reveals that the anecdote doesn’t necessarily support your argument, whilst the anecdote also doesn’t necessarily support the remainder of the article. It’s quite clear that the driver didn’t overestimate the risks for he was busted for reckless driving. The charges of reckless driving were made by an expert, on the scene—not by either you, me or anyone else “viewing” the incident in text some time later. I maintain that the police officer is a better judge of the event than either the driver or you or I, hence it is more likely that it was in fact reckless driving than it wasn’t reckless driving. Further since there is no mention of a passenger in the car, and the tone of the article leads me to believe that the author would have mentioned a passenger since the driver would have been risking someone else’s life—something of note—we can minimize the notion of peer induced reward as no one was watching. I say minimize, not discount, for no doubt the driver will tell his peers and perhaps gain some status in the storytelling.
With regards to my “failure to deal at all seriously...”, I feel I deal with the issue of teenage overconfidence quite seriously, for I acknowledge that teenagers do undertake risky behaviour, and that, as studies mentioned in the article show, they perceive the rewards to outweigh the risk. It is quite clear that in perceiving their rewards with such a high (subjective/personal) value, many of them have indeed made an error, for one third of teenage deaths are in car accidents. One should consider if death, both the risk of it and the actual occurrence of it, is truly a fair price to pay for driving fast (or under the influence of alcohol).
With regards to the intention to be immature, I have no knowledge of your intentions—only the observation that attacking my character without addressing the substance of my argument is an immature act.
Gwern wrote “Your precautions do not eliminate the risk (do police officers not move?), and further, they are non sequiturs: listing possible precautions do not prove or disprove anything about teenegers’ risk perceptions and receiver rewards, neither their elevation or reduction.” The precautions aren’t necessarily designed to eliminate the risk, though they may do so, they will however mitigate various risks, including chance of getting caught. I assume the teenager wanted to both drive the speed run and not receive a fine for doing so. That the precaution of a pre-drive of the route didn’t occur supports my contention that the teenager did not overestimate the risks, in fact underestimated that risk for he was caught. With regards to rebuttal that a police officer could move, I think it’s reasonable to conclude that if during a pre-drive the police officer is observed in the location it’s too risky a time for the speed run, whilst if the police officer isn’t observed then one has done some work in minimizing the risk of being caught.
We might consider a pre-drive of the route as an expense which made the reward vs investment unfavourable; this would reveal that the perceived reward is not so high as to overcome some (amount) of minutes of the teenager’s time. Something to consider; I’d appreciate your input on that line of reasoning.
David Dobbs presents an argument that teenagers’ perception of reward enables their risky behaviour. I believe that argument is congruent with my original statement “In my experience teenagers [are] indeed overconfident about their own decision making ability.” Perhaps I should have said “In my experience some teenagers are…” to be more appropriately pedantic.
To quote, use a greater-than sign at the beginning of the line. For more formatting help, click “show help” below the comment box.
Thanks thomblake, I’ll test that just now.
If you still think that...
You wish to defer to the cop’s expertise on whether it breaks the law? Excellent! I wish to defer to teens’ expertise on what they enjoy. I’m glad we could come to agreement that teens overestimate risk but enjoy risky behavior much more than older people.
‘fairness’ does not enter into it. As a transhumanist, I do not think death is a fair price for much of anything.
That aside, you repeat your 1⁄3 number as if it means anything in the absence of other information, as explained already. It does not.
This is so far your only point worth a damn. I suggest you continue this line of reasoning, sans the fucking anecdotes.
Anecdotes deserve expletives these days? Those must be some dastardly anecdotes.
They derailed a whole thread into a giant clusterfuck of general nonsense. Is that sufficiently dastardly?
Don’t know. The wall-of-text nonsense had already turned me off! I didn’t get as far as reading anecdotes.
You and I, we finally agree on something. :)
I honestly didn’t know we usually disagreed. Probably wouldn’t make it on a top ten list of “Most Likely To Disagree With Wedrifid”.
And I say that paper-machine would make it on a top ten list of “Most Likely To Disagree With Wedrifid”—so there!
I suppose you could run a poll.
Gwern wrote “You wish to defer to the cop’s expertise on whether it breaks the law? Excellent! I wish to defer to teens’ expertise on what they enjoy. I’m glad we could come to agreement that teens overestimate risk but enjoy risky behavior much more than older people.”
I’ve been comfortable all along with accepting what teens find enjoyable, I do not agree that teens overestimate risk and frankly I’m surprised you could glean that from what I’ve written. Let me be clear, the article reveals that teenagers underestimate risk.
Dobbs wrote—“It was the brain scans she took while people took the test. Compared with adults, teens tended to make less use of brain regions that monitor performance, spot errors, plan, and stay focused—areas the adults seemed to bring online automatically. This let the adults use a variety of brain resources and better resist temptation, while the teens used those areas less often and more readily gave in to the impulse to look at the flickering light—just as they’re more likely to look away from the road to read a text message.”
Estimating risk is about planning, monitoring performance, spotting errors and staying focused, whilst the final comment quoted above provides a suitable anecdote highlighting another situation where teens are more likely to underestimate risk. Further teens more readily give in to impulse—that reveals that teens readily don’t estimate risk (at all), do you see that? Estimating risk is in opposition to giving into impulse, an impulse is a sudden urge—estimation isn’t something that’s done suddenly, estimation is calculated by examining the context.
Gwern continues with “‘fairness’ does not enter into it. As a transhumanist, I do not think death is a fair price for much of anything.”
I used the term “fair” in the context of the gain outweighs the cost, it’s a colloquialism you obviously understand since you use it yourself, hence your comment “fairness’ does not enter into it” is false. However, you’re quite right that death isn’t a fair price for much of anything, providing support for the risk of death not being much of a fair price for anything either… again you just keep destroying your own argument.
So on one hand we’ve got teens who are shown in your quoted article to anecdotally underestimate risk, and also shown in research to make less use of the brain processes that estimate risk, and on the other hand it’s been shown in other research presented in the same article that teens place a higher value on rewards—all of that leads to the inescapable conclusion that some teens are overconfident in their decision making ability.
It’s become apparent to me that this discussion is an example of the disconfirmation bias. Perhaps you’d care to follow the procedure for minimizing/removing disconfirmation bias before you make another post, I have already done so several times.
Since gwern is, well beyond what I thought was typical of him, refusing to call a horse a horse, I’m going to say it: man, you’re so lame.
First understand that I noticed a disagreement going on between someone I’ve never seen before, a Mr. Peacewise, and a Mr. Gwern, whom I despise ever so lightly. He’s a jerk on IRC, you see. It would have made me feel better for him to be wrong and you to be right, you see. I wanted that, in my gut (though not by Tarski).
But man. Gwern posts an article with a perfectly reasonable conclusion attached, and you take a slice of anecdotal evidence to say just the opposite, which just happens to precisely match your preconceptions, and then in the ensuing discussion, instead of recognizing this, you accuse gwern first of being insulting and then of being labored by cognitive biases, meanwhile with no evidence, literally none, that you are right and he is wrong.
LAME.
I’m just going to point out that I’m surprised that this comment of mine has not only been voted up, but voted up quite strongly, considering it rests in a nest of ‘hidden’ comments. I fully expected this comment to rest somewhere in the neighborhood of −2 karma.
I’m not sure whether to call this a pleasant surprise. I’m really just confused.
I recall being surprised too, back when I saw your comment (then at +6). Usually I expect that sort of comment to be negative.
Mind you I upvoted it myself because you nailed it. You were a little bit of a jerk in the comment but I thought it was entirely appropriate to the circumstance—and balanced by at least agreeing with Gwern in the current battle.
Hey thanks Grognor, I’ll take your ad hominems about both Gwern and myself with the lack of respect they deserve.
With regards to the article, neither the research in it, nor the anecdotal evidence in it support the counter claim that teenagers are not overconfident.
The article does provide a rationale for why teenagers are overconfident; all I’ve done is unpack that information, first using the article’s anecdote, then using the article’s described research. Meh. One can lead a horse to water but one can’t make it drink.
Just politely, I don’t think this style is going to have good results for you. A more robust approach would be: when something does not deserve your respect, ignore it.
You are indeed correct shokwave, thanks.
Those are insults, not ad hominem fallacies. It is a social violation and not a logical one.
This is patently false.
Do you recall this line in the Matrix?
That’s what I hoped would be understood by the previous: one can lead a horse to water but one can’t make it drink.
I feel like “technically false” would be more accurate. If it’s just you, the horse, and a puddle, it’s surely going to be at least difficult to convince it to start slurping it up if it doesn’t want to.
“You can lead a horse to water but you can’t make him drink if you aren’t very imaginative and your resources are artificially limited”.
If they’re sufficiently limited, you can’t even lead a horse to water.
There is a sense of “drink” which encompasses raising a glass to your mouth and ingesting the liquid in it, or in the case of horses, lowering their head to the water, taking some into the mouth and swallowing it.
Sticking a tube down a horse’s throat certainly achieves something, but not precisely this.
Presumably your goal is a hydrated horse, however.
Another example of the danger of explicit goal maximizers.
You could also be trying to drug the horse or provide him with nutritional supplementation in liquid form.
I know you’re having a bit of a laugh, however force feeding is not drinking.
As the wiki you link quite clearly shows, force feeding is having the tube passed through the nose or mouth into the stomach. Whilst drinking, in context of a horse, doesn’t include a tube, and does include the liquid going into the mouth and being swallowed.
This is just missing the point. The mind is what the brain does. If a teenager chooses the higher reward and glances away, then by definition the areas involved in inhibition etc aren’t going to be as busy! If they were equally busy in both the glancers and non-glancers, no one would be discussing them in the first place!
Unfortunately, we pay with death for both action and inaction. Destroying my own argument indeed. If you seriously mean that, then you must mean that no risk should ever be taken, which is not a position many will sympathize with.
Nothing inescapable about it. What we have is a worthless anecdote you insist supports your position, extremely strong evidence against underestimation of risk, brain-imaging results you do not understand, none of which forces the conclusion of overconfidence as opposed to teens intrinsically having higher rewards, just as they intrinsically oversleep and all the other changes that go with puberty and being young adults.
Or, as it is more commonly known, the confirmation bias.
Thanks gwern for returning to the discussion, cheers.
Perhaps I am missing the point. I reason that one point is that if the areas involved in inhibition aren’t busy, then the person isn’t estimating risk, in context, they are instead being overconfident in their decision making, they’ve not judged the situation, instead acting in impulse.
Well no, actually, I do seriously believe that you’ve destroyed your own argument, and I don’t mean that no risk should ever be taken; instead I mean that when the risk is death of oneself or others, then the risk is so high as to outweigh most, if not all, rewards.
The evidence isn’t extremely strong against underestimation of risk, Dobbs wrote about growing from teenager to adult… “When this development proceeds normally, we get better at balancing impulse, desire, goals, self-interest, rules, ethics, and even altruism, generating behavior that is more complex and, sometimes at least, more sensible. But at times, and especially at first, the brain does this work clumsily. It’s hard to get all those new cogs to mesh.”
If one gets better at balancing those things during development, ie. growing up, that reveals one has a lack in balancing those things, which are to do with judgement—and poor judgement is one form of overconfidence. http://en.wikipedia.org/wiki/Overconfidence_effect
Dobbs goes on to write, “This let the adults use a variety of brain resources and better resist temptation, while the teens used those areas less often and more readily gave in to the impulse to look at the flickering light...” this reveals that teens give in to impulse, which is about lacking judgement which goes to them being overconfident.
Dobbs continues “If offered an extra reward, however, teens showed they could push those executive regions to work harder, improving their scores.”
This draws out a reason why they underestimate risk—because if the reward isn’t “extra” or perceived as higher to the teen, then they likely won’t push those executive regions to work harder, as is revealed in the articles previous paragraph, which I haven’t quoted.
Dobbs continues “Add stakes that the teen cares about, however, and the situation changes. In this case Steinberg added friends: When he brought a teen’s friends into the room to watch, the teen would take twice as many risks, trying to gun it through lights he’d stopped for before. The adults, meanwhile, drove no differently with a friend watching.”
This can actually be interpreted either way. Steinberg chooses a higher reward, whilst one can just as reasonably choose an underestimation of risk – a rationale for which is alluded to in the article: social feelings/thoughts are more sensitive for the teenagers, hence the brain is more focussed on social cognition than risk estimation (as the risk estimation processes require more effort), hence fewer brain cycles on risk estimation – hence risk underestimation.
If I may use a real world scenario based upon the research, and one that does happen, i.e. not fictional: when a teenager’s friends are brought into their car, the teen would take twice (or some other >1 multiplier) as many risks, trying to gun it through lights he’d stopped at before (rephrasing Dobbs, reasonably I believe). It’s known that some teenage passengers and drivers encourage or engage in risky driving for the thrill, which includes the social aspect of a shared thrill—in this situation the risk has also multiplied, for now the death of passengers should be taken into account, whilst the reward has also increased. Might be zero sum, don’t know, more research needed, but it’s clear that both reward and risk have increased due to the presence of others.
Now I hope that all you guys can see, this isn’t a case of me being some tired old man who doesn’t understand the joys of risk taking and thrill seeking. One should not forget that whilst I may be a somewhat respectable sober adult, I have already been through the teenage years under discussion and hence am able to recall those feelings that have been presented as if I am incapable of relating to. On the other hand a thrill seeking teenager is likely to be less aware of what it is to be a respectable sober adult, having never been one. In short I’ve been both a thrill seeking teenager, a thrill seeking adult and also a respectable sober adult. I do see both sides of this discussion.
That’s exactly what I’d expect a respectable sober adult to say.
Then you have the fortunate ability to accurately predict accurate statements.
And how does one judge when the reward is outweighed? Hm, that wouldn’t be subjective would it...?
Never reason from a price change. Are teenagers overconfident about how much daytime sleep they need, and their circadian rhythm shifts due to them getting “better at balancing impulse, desire, goals, self-interest, rules, ethics, and even altruism, generating behavior that is more complex and, sometimes at least, more sensible”?
This practically demonstrates the opposite: that the reward is the important part! What rewards do experimenters usually offer? Pretty lousy ones. Why is it surprising that teens might not work as hard as conscientious saps—I mean, mature adults. (I am reminded of how money rewards can improve IQ test performance by half a standard deviation or so.) This is like the usual criticism of the PISA test scores: of course Americans will underperform, since not a single one of the test-takers cares about what score they get. Incentives matter, which is what I’ve been saying all along.
Wow. So your explanation for a clear-cut reward link is… they get distracted and can’t estimate risk as accurately.
gwern, looks like you haven’t been understanding a particular point.
The article reveals that reward is why the teenagers underestimate risk. The article reveals that teens’ perception of reward motivates their impulsiveness.
Indeed, that’s a point I agree with and have right from my very first rebut in this discussion. The incentives provide motivation for underestimating risk.
That’s one way of summarising what the article is proposing.
No. The value of a decision is gain minus cost; if the cost remains the same but the gain increases, then that can swing the value of a decision from negative to positive. Thus, they can be more impulsive while maintaining the same beliefs about risk.
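A minimal sketch of that gain-minus-cost point, with illustrative numbers (not from the source): hold the perceived cost fixed and raise the perceived gain, and the decision flips without any change in beliefs about risk.

```python
# A minimal sketch; the function and numbers are illustrative assumptions.

def decision_value(perceived_gain, perceived_cost):
    return perceived_gain - perceived_cost

perceived_cost = 5.0  # belief about the risk/cost, held constant throughout

alone      = decision_value(perceived_gain=3.0, perceived_cost=perceived_cost)  # -2.0: don't do it
with_peers = decision_value(perceived_gain=8.0, perceived_cost=perceived_cost)  # +3.0: do it

# The sign of the decision flipped even though the risk estimate never changed;
# only the perceived reward rose.
print(alone, with_peers)
```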
I’ll unpack that… Thus, they can be overconfident while maintaining the same beliefs about risk. Being impulsive is being overconfident, impulsive is a lack of estimating risk, which is underestimating risk.
I think we’re looking at different dictionaries, so I’ll abandon the word impulsive and try with a more object-level phrase. They can drive less carefully while maintaining the same beliefs about risk.
Hilarious, the point you have abandoned has +2, whilst my point that forced the abandoning still has −1. anyways...
And if those same beliefs are already an underestimation of risk? Strike 1, just clipped the outside of the plate.
Let’s unpack that last quote in the context of driving… a-yawn-gain. They can drive less carefully. “Less carefully” is about less care—and what is “care”? It’s about feeling concern, about attaching importance to something.
So they feel less concern, they attach less importance to driving. What’s the key word there, hmmm? “Less”: well, that’s a term that, in context, goes with “under”-estimate. Do you think? I do. Strike 2, straight up the middle of the plate. Batter says, I didn’t see that. Too bad, says the ref.
Let’s examine the opposite side, to include a process for minimising disconfirmation bias. They drive less carefully. Ok, I’m flipping my brain. The less carefully has nothing to do with underestimating risk, actually in this flip it’s about overestimating risk… why do I say overestimate—well apparently that’s part of the argument opposing my viewpoint, check above.
Well, what does the dictionary say about what “overestimate” means?
Hang on, hang on—overestimate = estimate something to be more important than it really is. Does overestimate sound at all like “less care”? No it doesn’t; contradiction found; the conclusion is that driving less carefully is about underestimating risk. Strike 3. Yer outta here!
Now, here’s a thing. When the teenagers judge the reward highly, sufficiently highly to outweigh the risk of death—they have underestimated the risk. Perception of reward and risk are not in opposition, they go hand in hand.
Now let’s look at the rest of the sentence.
The implication, in context, is that it’s reward driving the behaviour, supposedly being the entire reason for the behaviour; one significant context of the reward perception was peer involvement (see article). Let’s try that one.
They can drive less carefully with more people in the car, while maintaining the same beliefs about risk, because they perceive the rewards are higher.
That fits the counter argument to my viewpoint… but hang on, now with more people in the car the risk of death is multiplied. So factually the risk has increased—yet the behaviour is supposedly all due to the reward, now if the behaviour is truly all to do with the reward, then yep the teen has discounted the risk—for the risk increased and it’s not changing the behaviour.
So in that situation we’ve got another example where a teen has underestimated the risk due to a perception of a higher reward.
Am I being too anecdotal for you guys? Of course: discount outgroup behaviour whilst permitting the same from the ingroup. The article is itself filled with anecdotes… maybe we should just dismiss the entire article… stop press, no no, don’t do that, there’s no counter to my OP then; let’s just pick and choose the parts of it that support the counter, and dismiss those that don’t—both in the research and the anecdotes.
Please by all means, chuck up the −1, I’m considering them badges of honour now.
I think we’re all well past the point of expecting you to actually read and/or seriously consider anything. However, in case other people are still reading this thread:
Ignore the fact that the parent abandoned a word, not a point. Karma never has been, and never ought to be, about deciding the correctness of arguments. Also, the usual litany of objections to people mindlessly invoking karma. Downvoted in accordance with my policy.
thanks for the link paper-machine, that’s quite a reasonable policy.
If I wasn’t downvoted to such a degree that I have no opportunity to downvote, I might consider implementing it. I’ll certainly use the concept to more thoroughly mitigate my annoyance about those unable to follow argument.
I’ll up vote you in accordance with my policy. Which is that if a person says a single useful thing, regardless of the rest of their post, I’ll give it a +1.
My reasoning for this policy is twofold. I reject the negativity that is encouraged by criticism and its aim of proving or showing that someone is wrong, rather than proving oneself right. I accept that when one focuses upon the positive, or worthwhile, components of someone’s beliefs/actions/arguments, one creates a valuable synergy that encourages a pathway towards truth and understanding.
Sometimes I don’t implement my own policy, but hey, it’s all a work in progress.
On reflection the sites name “lesswrong” really should have set off an alarm bell. I’m not interested particularly in being lesswrong. I am interested in being moreright.
Positive psychology and educational psychology have shown that positivity contributes more readily to learning than negativity.
The name is a deliberate choice, and it’s rooted in a belief in the difficulty of being completely right. It seeks to minimize arrogance and maximize doubt. At the start of every post, I try to imagine the ways that I am currently being wrong, and reduce those.
For example, my first reaction to this comment was to pull out my dictionary and argue that my use of “impulsive” was right, because I knew what I meant when I wrote it and could find that meaning in a dictionary. Instead, I decided that it takes two to communicate, and that if you disagreed with the implications of the word, it was the wrong word to choose. So I abandoned the word in an attempt to become less wrong.
I agree with you that positivity is generally more powerful than negativity; that’s why I try to be positive. Even so, negativity has its uses.
Kahneman wrote, in Thinking, Fast and Slow, that he wrote a book for gossips and critics — rather than a book for movers and shakers — because people are better at identifying other people’s biases than their own. I took this as meaning that his intention was to make his readers better equipped to criticize others’ biases correctly; and thus, to make people who wish to avoid being criticized need to debias themselves to accomplish this.
Presumably, part of the reason that a commenter would avoid making a dictionary argument on LW is if that commenter knows that LWers are unlikely to tolerate dictionary arguments. Teaching people about biases may lead them to be less tolerant of biases in others; and if we seek to avoid doing things that are odious to our fellows, we will be forced to check our own biases before someone else checks them for us.
Knowing about biases can hurt you chiefly if you’re the only one who’s sophisticated about biases and can argue fluently about them. But we should expect that in an environment with a raised sanity waterline, where everyone knows about biases and is prepared to point them out, people will perpetrate less egregious bias than in an environment where they can get away with it socially.
(OTOH, I don’t take this to excuse people saying “Nah nah nah, I caught you in a conjunction fallacy, you’re a poopy stupid head.” We should be intolerant of biased arguments, not of people who make them — so long as they’re learning.)
Good point. I normally don’t like accusing others of bias, and I will continue to try to refrain from doing so when I’m involved in something that looks like a debate, but I agree that it is useful information that should not be discouraged.
Vaniver. Mate. I accept that you believe

> It seeks to minimize arrogance and maximize doubt.

but I dispute that it achieves those. I believe instead that it maximises arrogance and maximises doubt in the other’s point of view, and in maximising doubt in the other person’s view we minimize our doubt in our own view.
The belief that it’s difficult to be completely right, encourages people to look for that gap that is “wrong” and then drive a wedge into it and expand it until it’s all that’s being talked about.
If 95% is correct and 5% is wrong, criticising the 5% is a means to hurting the person—they have after all gotten 95% correct. It’s not rational to discount peoples feelings by focusing upon their error and ignoring their correctness. It’s destructive, it breaks people. Sure some few thrive on that kind of struggle—most don’t, again this is proven stuff. And I’m not going to post 10 freeking sources on that—all that’s doing for me is wasting my time and providing more opportunity for others to confirm their bias by fighting against it. If someone wants to find that information it’s out there.
When you (or anyone else) got a high distinction for a unit or assignment or exam, was that a moment to go, fuck—didn’t remember that a preganglionic fibre doesn’t look anything like a postganglionic nerve (aka ds9), or was it a moment to leap for joy and go, you little ripper, I got 95%!
I agree negativity has its uses, often it’s about “piss off” and go away, leave me alone; sometimes that’s useful, but you’ll note that those fall on the arrogant side of emotions—that of self. (this will get a wedge driven in it too, heck I could drive one in, but it remains somewhat true).
Vaniver, I’d consider it a positive discussion to talk about negativity. Would you mind explaining to me where “negativity has its uses”.
And to show that I consider the

> It seeks to minimize arrogance and maximize doubt.

viewpoint.
Yeh, ok I get that, when we apply the concept to ourselves then we are minimizing our arrogance and maximizing our doubt. And that’ll work. We’ll second guess ourselves, we’ll edit our posts, and re edit, and check our dictionaries and quote our sources and these are all useful things. They keep us honest. But what about when we apply those concepts to others—as is our tendency due to the self serving bias and the group serving bias?
There are many fields in which it is better to not try than to get 5% wrong. Would you go bungee jumping if it had a 5% failure rate?
Mostly in discouraging behavior. As well, an important rationality skill is updating on valuable information from sources you dislike; dealing with negativity in safer circumstances may help people learn to better deal with negativity in less safe circumstances.
Thanks for the post on negativity Vaniver. I wouldn’t go bungee jumping if it had a 5% failure rate.
That viewpoint can be considered as based upon Skinner’s model of Behaviourism; it’s been shown to be less effective for learning than being positive.
Makes sense—we tend to remember what we are emotionally engaged in and what is reinforced. When the negativity is associated with the 5%, what is reinforced is that a person is “wrong”, that’s associated with feelings of low self efficacy and tends to discourage (most) people from the topic. When that happens they regress—not progress, they tend to get even more wrong next time as they’ve not stayed engaged in the topic.
I agree that an important skill is to update one’s information; however, the discouragement that is provoked by negativity isn’t efficient in evoking updating. Confident people update their information; people who aren’t attacked have no need to defend, and so they remain open. Openness is the key attitude for updating information. Negativity destroys and/or minimizes confidence, which contributes to closing a mind.
What negativity does, in context of learning, is to encourage secrecy, resentment, avoidance and close mindedness. Again this stuff is all known as a consequence of punishment, which is what negativity—as discouraging behaviour is associated with.
Apparently a more effective way forward is to model the behaviour that one wants to encourage and ignore the behaviour one wants to discourage—extinction.
I agree that saying “Good job putting down that toy” to my 22-month-old is more effective at reducing throwing of his toys than saying “Don’t throw toys.” And extinction works great on tantrums.
But you seem to be overgeneralizing the point a bit. When dealing with competent adults, saying “X is wrong” is an effective way of improving the listener’s beliefs. If the speaker doesn’t justify the assertion, that will and should affect whether the listener changes beliefs.
Of course, this is probably bad management style. We might explain that fact about people-management by invoking psychological bias, power imbalance, or something else. But here, we’re just having a discussion. No one is asserting a right to authority over anyone else.
Without necessarily asserting its truth, this just-so story/parable might help:
For various social reasons, popular kids and nerds have developed very different politeness rules. Popular kids are used to respect, so they accept everything that they hear. As a consequence, they think relatively carefully before saying something, because their experience is that what is said will be taken seriously. By contrast, nerds seldom receive social respect from their peers. Therefore, they seldom take what is said to them to heart. As a consequence, nerds don’t tend to think before they speak, because their experience is that the listener will filter out a fair amount of what is said. In brief, the popular filter at the mouth, the nerds filter at the ear.
This all works fine (more or less) when communicating within type. But you can imagine the problems when a nerd says something mean to a popular, expecting that it will be filtered out. Or a popular says something only vaguely nice, but the nerd filters out negativity that isn’t there and hears sincere and deep interest.
TimS, I’m glad we agree on several points, extinction and positive reinforcement of children. I wonder why these methods are espoused for children, yet tend to be used less for “competent adults”. Thanks for planting the seed that I might be overgeneralizing the point a bit, I’ll keep an eye on that.
I am reminded that saying “X is wrong” to an adult with a belief is ineffective in many circumstances, most notably the circumstance where the belief is a preconception, based in emotion, or, more specifically, an irrational belief. Is this not one consequence of bias? That a person, in some cases/topics, won’t update their beliefs and will indeed strengthen their belief in the counterargument against the updating. Presumably you’ve read http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/ which alludes to how knowledge of bias can be used dismissively, i.e. an irrational use of a rationale.
“Why logical argument has never been successful at changing prejudices, beliefs, emotions or perceptions. Why these things can be changed only through perception.” Edward de Bono, I Am Right, You Are Wrong. De Bono discusses this extensively.
If the belief is rational, and perhaps that’s one component of what you consider a “competent adult”, the adult could be more open to updating the fact or knowledge. Yet even this situation has a wealth of counterexamples, so much so that there is a term for it: belief perseverance.
In my experience unsolicited advice is rarely accepted regardless of its utility and veracity. Perhaps I communicate with many closed minds, or perhaps I am merely experiencing the availability heuristic in context of our discussion.
Checking dictionaries doesn’t really help eliminate bias. Just saying.
I could not parse this paragraph. It might be just that it was written in the Australian idiom or something; maybe quotation marks would help.
When you (or anyone else) get a high grade on a paper or assignment or exam, is that a moment to think “Darn- I didn’t remember (single obscure thing you got wrong),” or is it a moment to leap for joy and say “I got a 95! Ahaha!”?
Thanks!
thomblake, consider a high distinction as an A+ grade. Perhaps along the lines of Newtonian mechanics: it’s mostly right.
Sure, if you’re running in debate mode and thinking in terms of ‘sides’ or ‘us versus them’ and trying to ‘win’, then that might be something to do. Solution: don’t do that in the first place.
Don’t worry, everything you believe is almost certainly wrong—don’t expect to find yourself in the 95% correct state any time soon. We’re running on corrupted hardware in the first place, and nowhere near the end of science. We can reduce hardly any of our high-level concepts to their physical working parts.
First, fix those too.
Indeed, a valuable point. So what’s up with the score-keeping system of LW, then? It encourages thinking in terms of sides and competition: −1, not my side; +1, my side. −1 lost; +1 won.
lol. Fair enough. I would place the 95% not on some unknown scale of absolute truth that science doesn’t yet know, but on the relative scale of what science currently knows. Does that make a difference to your point?
Yep, it’s tough to become selfless yet still place enough value on oneself not to be a doormat. Rudyard Kipling’s “If” shows a pathway.
Eastern philosophy also has approaches to this, ones that are a thousand years ahead of Western science.
Karma allows users to easily aggregate the community opinion of their comments, and allows busy users to prioritize which comments to read. I try to make more posts like my highly upvoted posts, and fewer posts like my highly downvoted posts. It is common to see discussions where both users are upvoted, or discussions where both users are downvoted. When there’s a large karma split between users, that’s a message from the community that the users are using different modes of discussion, and one is strongly preferred to the other.
Both positive and negative options are necessary so that posts which are loved by half of the users and hated by the other half of the users have a neutral score, rather than a high score. Similarly, posts which are disliked by many users should be different from posts that everyone is indifferent to.
What was the motivation behind this addition? Was it positive?
The motivation was to plant a seed… motivated by the +2 on my comment.
But why that seed in this conversation?
It is not uncommon to see scientists who have studied Eastern philosophy. Thus, how could Eastern philosophy be a thousand years ahead of science, when it is part of science?
To assist in debiasing the ageism that was being expressed in the conversation.
Yes. The difference in perspective probably explains why Eliezer thought Less Wrong was a good name, whereas you do not. Do not compare yourself to others; “The best physicist in ancient Greece could not calculate the path of a falling apple.”
It’s a hurdle to get past thinking of it in that way for some people, to be sure. It seems a worthwhile cost though, for an easy way to efficiently express approval/disapproval of a comment, combined with automatic hiding of really bad comments from casual readers.
While some people use them that way, voting should not generally be used to mean “I agree” or “I disagree”. The preferred interpretation is “I would like to see [more/fewer] comments like this one” (which may yet include agreement/disagreement, but they should be minor factors as compared to quality).
Parts of the parent comment that are particularly wrong:
paper-machine fairly well handled that one in terms of “Rule 1 of karma is you do not talk about karma”. Also, it was not a point that was abandoned, but a word. It is a common technique here to taboo a word whose definition is under dispute, since arguing about definitions is a waste of time.
Since you do not seem to understand, what happened there is that your ‘unpacking’ did not convey what Vaniver’s statement actually was intended to convey, so Vaniver replaced the word ‘impulsive’ with a more object-level description less amenable to misunderstanding.
This is not a good way to communicate. If you really don’t see what in this language would make someone take your arguments less seriously, someone could explain.
Perhaps you are not familiar with the study of risk, but the phrase “overestimate risk” means “estimate risk to be larger than it really is”, not “more important”. Either you are too ill-informed about risk analysis to be involved in this conversation, or you are trolling.
Also, appeals to the dictionary are just about the worst thing you can do in a substantive argument. If there is a misunderstanding, then definitions (whether from a dictionary or not) are useful for resolving the misunderstanding. They are really not useful to prove a point about what’s actually occurring.
This is a clear violation of the principle of charity.
And as a general rule, fixing your own bias is good, but accusing others of bias is bad. We must be particularly careful to remember that knowing about biases can hurt people. EDIT: Updating on this comment: it is useful to point out examples of bias in others, but do so in a way that does not score points in a debate, to be sure you’re not fooling yourself.
You should not. Votes are an indication of whether the readers of this site would like to see more comments like yours. If you’re getting feedback that you’re making comments we wouldn’t like on our site, and you consider that a ‘badge of honor’, then you’re a troll and should actually be banned entirely.
Gee, and I thought I’ve been acting like a cunt and damaging LW’s standards (for a few days lately). Nice to see that better people can behave themselves worse.
I hope this was deliberate irony to be funny.
That is, the ‘stereotype’ you are offended by seems to be ‘respectable sober adult’, and you respond with an accusation of “immature nonsense”.
I’m disappointed that gwern, a presumably respected poster, going by his karma and post count, jumps so quickly to typical internet trash talk.
Is it typical internet trash talk to suggest that you enjoy risks less than others might?
I’m interested to learn how to find the Internet you’re familiar with—it sounds remarkably more civil than the one I use.
If anything, your “be a good chap” seemed condescending well beyond any ‘trash talk’ qualities that might be present in what gwern wrote.
It’s not especially reckless. It is ‘Reckless Driving’. Disobeying the law to that degree for little payoff is a dumbass move, not so much a physically reckless one.
I second Robin’s question.
I’d also like to learn whether the experimental finding holds for a wide variety of decisions. (Eliezer mentioned only picking a job offer.)
Aren’t people consistently underconfident when it comes to their money? Everybody does something, invests in something, but isn’t really sure about it even after they’ve done it. This is at its most extreme when it comes to the stock market.
Another instance is when people approach members of the opposite sex whom they find attractive. They consistently underestimate themselves.
Otherwise it depends on what they’re used to; for example, people in technology are underconfident when it comes to negotiation and so forth.
In the case of Divorce, the reasons cannot always be taken as evidence for the marriage having been a mistake to begin with.
Things happen and people change.
This is an interesting idea and doesn’t surprise me given thin-slicing behavior and the like. But the research itself seems a little thin. Where is the actual testing versus a control group? What about other decisions that don’t involve jobs?
Also, I think we probably know what we will choose 99% of the time because we make the decision instantaneously. The real question is whether we do this even on decisions where we don’t consciously know what we are going to choose. Are we as accurate in those decisions?
It is nice to have a clear example of where people are consistently underconfident. Are there others?
People tend to take into account the magnitude of evidence (how extreme is the value?) while ignoring its reliability, and they also tend to be bad at combining multiple pieces of evidence. So another good way to generate underconfidence is to give people lots of small pieces of reliable evidence. (I believe it’s in the same paper, “The Weighing of Evidence and the Determinants of Confidence”.)
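To illustrate the normative side of that, here is a minimal sketch of my own (the 3:2 likelihood ratio and the 20 clues are hypothetical figures, not from the paper): combining many individually weak but reliable pieces of evidence by Bayes’ rule produces far more confidence than the “it’s only a bunch of small hints” feeling suggests.

```python
# Hypothetical setup: 20 independent clues, each favoring hypothesis H
# by a modest 3:2 likelihood ratio, starting from 1:1 prior odds.
likelihood_ratio = 1.5
n_clues = 20

posterior_odds = 1.0 * likelihood_ratio ** n_clues
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Normative confidence after {n_clues} weak clues: {posterior_prob:.4f}")
# Roughly 0.9997 -- someone who only notices that each clue is individually
# weak, and fails to accumulate them, will be badly underconfident.
```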
I recall having an argument over dinner with a friendly acquaintance about an unimportant but interesting problem. I thought about it for few days and decided he was right. I’ve hated him ever since.
And now we’re curious. What was the problem?
Are you they as available, in your heuristic estimate of your competence?
I’m unable to parse this sentence.
Drop the “you” and see the linked “Availability Heuristic”.
I used to have a button that said “If you haven’t changed your mind lately, how do you know you’ve still got one?” I really liked that sentiment.
It’s very easy to get comfortable with our opinions and beliefs, and uncomfortable about any challenge to them. As I’ve posted elsewhere, we often identify our “selves” with our “beliefs”, as if they “were” us. Once we can separate our idea of “self” as different from “that which our self currently believes”, it becomes easier to entertain other thoughts, and challenges from others, to our beliefs and opinions. If we are comfortable and secure in our own selves, then we can discuss dispassionately the ideas that contradict what we have previously held to be true. It is the only way that we can learn, that we can take in new and different ideas without that being a blow to our ego. Identifying our selves with our thoughts, opinions, beliefs, blocks us, threatens us, so that we get stuck with our old ways of doing things and framing things, and we don’t grow and change with ease.
A lot of people probably already know this, it’s a familiar piece of “deep wisdom”, but anyway: you can use this not-changing of your mind to help with seemingly complicated decisions that you ponder over for days. Simply assign the possible answers and flip a coin (or roll a die, if you need more than two). It doesn’t matter what the result is; depending on whether it matches your already-made decision, you will either immediately reject the coin’s “answer” or not. That tells you what your first decision was, unclouded by any attempts to justify the other option(s).
Now, if you’ve trained your intuition (aka have the right set of Cached Thoughts), that answer will be the correct or better one. Or, as has happened to me more than once, you realize that both alternatives are actually wrong and your mind already came up with a better solution.
Without knowing the terms or the technical explanation for it, this is what I have always been doing automatically for as long as I can remember making decisions consciously (generously applying a confidence margin and overconfidence moderation proportional to the applicable biases). However, upon reading the sequences here, I realize that several problems I have identified in my thought strategies actually stem from my reliance on training my intuition and subconscious for what I now know to be simply better cached thoughts.
It turns out that no matter how well you organize and train your Caches and other automatic thinking, belief-forming and decision-making processes, some structural human biases are virtually impossible to eliminate by strictly relying on this method. What’s more, by having relied on this for so long, I find myself having even more difficulty training my mind to think better.
That’s true. Matters are not helped by the value society places on commitment and consistency. When we do, in fact, change our minds, we are more often than not labeled as “wishy-washy,” or some similarly derogatory term.
This article reminds me of the movie “Inception”… once an idea is planted it is hard to get it out.
As Eliezer says, on short time scales (days, weeks, months) we change our minds less often than we expect to. However, it’s worth noting that, on larger time scales (years, decades), the opposite seems to be true. Also, our emotional state changes more frequently than we expect it to, even on short time scales. I can’t seem to recall my exact source on this second point at the moment (I think it was some video we watched in my high school psychology class), though, anecdotally, I’ve observed it to be true in my own life. Like, when I’m feeling good, I may think thoughts like “I’m a generally happy person”, or “my current lifestyle is working very well, and I should not change it”, which are falsifiable claims/predictions that are based on the highly questionable assumption that my current emotional state will persist into both the near and distant future. Similarly, I may think the negations of such thoughts when I’m feeling bad. As a result, I have to remind myself to be extra skeptical/critical of falsifiable claims/predictions that agree too strongly with my current emotional state.
I would say that the study by Griffin and Tversky is incomplete. The way I see it, we have an inner “scale” of the validity of evidence and decide based on that. As was pointed out in one of the previous posts, we should bet on an event 100% of the time if that event is more likely than the alternatives. Something similar is happening here: if we are more than 50% sure that job A is better than job B, we should pick job A. Given that the participants were 66% sure, this would mean that there is a low a priori probability of them changing their minds. If we assume a normal distribution for the “scale” of evidence in our brains, we find that there is indeed a very small chance of the participants changing their minds, roughly a 2-sigma event.
If my hypothesis is correct, then in a new study in which the participants are a priori 50% sure about the job they want to choose, they should change their minds more than the 4% in this example, much more actually. Given that we do have a mechanism in our minds that makes us stick to our decisions, especially in cases when we are around 50% sure, which stops us from changing our minds constantly and behaving erratically, I would hypothesize that the participants wouldn’t change their minds 50% of the time, but probably somewhere in the region of 40%–50%. I would also expect that when the participants are 80% or 90% sure a priori, they would still change their minds in maybe 1% or 2% of cases, because we are usually more sure of our answers than we should be.
All in all, I think that it is perfectly rational that if you are 66% sure about something you make that decision in 90% to 99% of the cases. Being 80% sure about something should not mean that you should choose the alternative in 20% of the cases.
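To make that picture concrete, here is a minimal Monte Carlo sketch under my own assumptions (a log-odds “scale” of evidence and a hypothetical spread for the evidence that arrives between the prediction and the final choice); the parameters are illustrative, not taken from the paper:

```python
import math
import random

random.seed(0)

# Assumed model (not from Griffin & Tversky): the current 66% lean toward
# job A is a point on a log-odds scale, and evidence arriving before the
# final choice adds zero-mean normal noise to that point.
p_initial = 0.66
initial_log_odds = math.log(p_initial / (1 - p_initial))  # about 0.66
noise_sd = 0.4  # hypothetical spread of the late-arriving evidence

trials = 100_000
flips = sum(
    1 for _ in range(trials)
    if initial_log_odds + random.gauss(0, noise_sd) < 0  # lean reverses to job B
)
print(f"Estimated flip rate: {flips / trials:.3f}")
# With these assumed parameters the flip rate comes out around 5%, in the
# same ballpark as the 1-in-24 reversals actually observed.
```

A different assumed noise_sd moves the rate around, which is the point: a 66% lean need not translate into anything like a 34% reversal rate.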
I think “The Bottom Line” here is meant to link to the essay.
BLUF: The cited paper doesn’t support the claim that we change our minds less often than we think, and overall it and a paper it cites point the other way. A better claim is that we change our minds less often than we should.
The cited paper is freely downloadable: The weighing of evidence and the determinants of confidence. Here is the sentence immediately following the quote:
The citation is to Vallone, R. P., Griffin, D. W., Lin, S., & Ross, L. (1990). Overconfident Prediction of Future Actions and Outcomes by Self and Others. Journal of Personality and Social Psychology, 58, 582-592.
Self-predictions are predictions
Occam’s Razor says that our mainline prior should be that self-predictions behave like other predictions. These are old papers and include a small number of small studies, so probably they don’t shift beliefs all that much. However much you weigh them, I think they weigh in favor of Occam’s Razor.
In Vallone 1990, 92 students were asked to predict their future actions later in the academic year, and those of their roommate. An example prediction: will you go to the beach? The greater time between prediction and result makes this a more challenging self-prediction. Students were 78.7% confident and 69.1% accurate for self-prediction, compared to 77.4% confident and 66.3% accurate for other-prediction. Perhaps evidence for “we change our minds more often than we think”.
I think more striking is that both self and other predictions had a similar 10% overconfidence. They also had similar patterns of overconfidence—the overconfidence was clearest when it went against the base rate, and students underweighted the base rate when making both self-predictions and other-predictions.
As well as Occam’s Razor, self-predictions are inescapably also predicting other future events. Consider the job offer case study. Will one of the employers increase the compensation during negotiation? What will they find out when they research the job locations? What advice will they receive from their friends and family? Conversely, many other-predictions are entangled with self-predictions. It’s hard to conceive how we could be underconfident in self-prediction, overconfident in other-prediction, and not notice when the two biases clash.
Short-term self-predictions are easier
In Griffin 1992, the first test of “self vs other” calibration is study 4. This is a set of cooperate/defect tasks where the 24 players predict their future actions and their partner’s future actions. They were 84% confident and 81% accurate in self-prediction but 83% confident and 68% accurate in other-prediction. So they were well-calibrated for self-prediction, and over-confident for other-prediction. Perhaps evidence for “we change our minds as often as we think”.
But self-prediction in this game is much, much easier than other-prediction. 81% accuracy is surprisingly low—I guess that players were choosing a non-deterministic strategy (eg, defect 20% of the time) or were choosing to defect based in part on seeing their partner. But I have a much better idea of whether I am going to cooperate or defect in a game like that, because I know myself a little, and I know other people less.
The next study in Griffin 1992 is a deliberate test of the impacts of difficulty on calibration, where they find:
Self-predictions are not self-recall
If someone says “we change our minds less often than we think”, they could mean one or more of:
We change our minds less often than we predict that we will
We change our minds less often than we model that we do
We change our minds less often than we recall that we did
If an agent has a bad self-model, it will make bad self-predictions (unless its mistakes cancel out). If an agent has bad self-recall it will build a bad self-model (unless it builds its self-model iteratively). But if an agent makes bad self-predictions, we can’t say anything about its self-model or self-recall, because all the bugs can be in its prediction engine.
Instead, Trapped Priors
This post precedes the excellent advice to Hold Off on Proposing Solutions. But the correct basis for that advice is not that “we change our minds less often than we think”. Rather, what we need to solve is that we change our minds less often than we should.
In Trapped Priors as a basic problem of rationality, Scott Alexander explains one model for how we can become stuck with inaccurate beliefs and find it difficult to change our beliefs. In these examples, the person with the trapped prior also believes that they are unlikely to change their beliefs.
The person who has a phobia of dogs believes that they will continue to be scared of dogs.
The Republican who thinks Democrats can’t be trusted believes that they will continue to distrust Democrats.
The opponent of capital punishment believes that they will continue to oppose capital punishment.
Reflections
I took this post on faith when I first read it, and found it useful. Then I realized that, just from the quote, the claimed study doesn’t support the post, people considering two job offers are not “within half a second of hearing the question”. It was that confusion that pushed me to download the paper. I was surprised to find the Vallone citation that led me to draw the opposite conclusion. I’m not quite sure what happened in October 2007 (and “on August 1st, 2003, at around 3 o’clock in the afternoon”). Still, the sequence continues to stand with one word changed from “think” to “should”.