There was a recent LW discussion post about the phenomenon where people presented with evidence against their position end up believing their original position more strongly. The article described an experiment that found at least one way to avoid this problem, so that people presented with contrary evidence actually update correctly. Does anybody know which discussion post I’m talking about? I’m not finding it.
Was it this one?
’Twas!
I’m not sure about the LW discussion post, but the phenomenon that you describe closely resembles Nyhan and Reifler’s ‘backfire effect’, which I think reached a popular audience when David McRaney wrote about it on You Are Not So Smart.
ETA: Googling LW for “backfire effect” and Nyhan doesn’t turn up any recent post, so maybe this is not what you are looking for.
I’m not in a position to Google easily, but “belief polarization” is another term for this, I think.
Are you thinking of the one where people updated only to consider dangers less likely than their initial estimate?
http://lesswrong.com/lw/814/interesting_article_about_optimism/
That’s not what I was thinking of, but interesting nonetheless.