One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting. I happened to think of one exception: if you expect that something will cause a change in your beliefs when it shouldn’t, because it uses strong rhetorical techniques (e.g. highlighting highly unrepresentative examples) whose effect you can’t fully eliminate even when you know that they’re there.
(I have a feeling that this might have been discussed before, but if so, I don’t remember where.)
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting.
It’s more like, if you expect (in the statistical sense) that you will rationally update your beliefs in some direction upon receiving some piece of evidence, then your current probability assignments are incoherent, and you should update on pain of irrationality. It’s not just that you might as well update now instead of waiting. But this only applies if your expected future update is one that you rationally endorse. If you know that your future update will be irrational, that it is not going to be the appropriate response to the evidence presented, then your failure to update right now is not necessarily irrational. The proof of incoherence does not go through in this case.
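For reference, the coherence argument being alluded to is conservation of expected evidence. In sketch form, writing $H$ for the hypothesis and $e_i$ for the mutually exclusive possible observations:

$$\mathbb{E}\big[P(H \mid E)\big] \;=\; \sum_i P(e_i)\,P(H \mid e_i) \;=\; \sum_i P(H \wedge e_i) \;=\; P(H).$$

The equality only binds if each $P(H \mid e_i)$ is an update you rationally endorse; an anticipated shift driven by rhetoric rather than by the evidential import of $e_i$ is not of that form, so expecting it does not by itself make your current assignment incoherent.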
if you expect that something will cause a change in your beliefs when it shouldn’t
This seems like a breakdown in reflective consistency. Shouldn’t you try to actively counter/avoid the expected irrationality pressure, instead of (irrationally and meekly) waiting for it to nudge your mind in the wrong direction? Is there a specific example that prompted your comment? I can think of some cases offhand. Say, you work at a failing company and you are required to attend an all-hands pep talk by the CEO, who wants to keep employee morale up. There are multiple ways to avoid being swayed by rhetoric: not listening, writing down possible arguments and counterarguments in advance, listing the likely biases and fallacies the speaker will play on and making a point of identifying and writing them down in real time, etc.
No specific example prompted it, but Yvain had a nice discussion of persuasive crackpot theories on his old blog (now friends-locked, but I think sharing the excerpt below is okay), which seems like a good illustration:
When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology rather than the universally reviled crackpots who write books about Venus being a comet.
I guess you could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try.
As for trying to actively counter the effect of the misleading rhetoric, one can certainly try, but one should also keep in mind that we’re generally quite bad at this. E.g., while not exactly the same thing, this bit from Misinformation and its Correction seems relevant:
A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011).
There’s an intermediate step of believing things because you expect them to be true (rather than merely because they are convincing). The problem is fully corrected if you base the update on how well an argument correlates with truth rather than on how convincing it is.
In other words, if you would expect the fifth column more if you see sabotage, and also more if you don’t see sabotage, then you can reduce that to just expecting the fifth column more right now.
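To make that concrete, here is a toy numerical check (the numbers are made up purely for illustration): whatever conditional probabilities you plug in, the probability-weighted average of the two posteriors equals the prior, so both posteriors cannot sit above it.

    # Conservation of expected evidence, with made-up numbers.
    # H = "there is a fifth column", E = "sabotage is observed".
    p_h = 0.4               # prior P(H), hypothetical
    p_e_given_h = 0.5       # P(E | H), hypothetical
    p_e_given_not_h = 0.3   # P(E | not H), hypothetical

    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h   # P(E)
    post_e = p_h * p_e_given_h / p_e                         # P(H | E)     ~0.526
    post_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)         # P(H | not E) ~0.323

    # The posteriors straddle the prior, and their weighted average is the prior:
    expected_posterior = p_e * post_e + (1 - p_e) * post_not_e
    print(expected_posterior)   # 0.4, up to floating-point error

If you nonetheless anticipate an upward shift on both branches, the coherent fix is to raise the prior now rather than to wait for the observation.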
Sure, you should try to counter such effects. But sometimes the costs of doing that are higher than the losses that would result from an incorrect belief.
This seems related, though not exactly what you are asking for.