I’m not sure I’d interpret the results quite like that. “We believe everything we’re told” seems like a bit of an exaggeration. I don’t have a deep-seated, strong belief that 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 ≈ 2,250. That’s just a quick guess, based on the information currently floating around in my skull. If you asked me for another guess tomorrow, I might give a radically different answer.
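The arithmetic anchor is easy to check directly, for what it's worth:

```python
from math import factorial

# 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 is just 8!
print(factorial(8))  # 40320
```

So a quick guess of ~2,250 undershoots the true value by more than an order of magnitude, which is exactly what the anchoring effect predicts for a descending sequence.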
It seems like we just encounter a lot of information over the years, and it all gets tossed into the giant box that is our skull. Then something comes up (something we see or hear, a word, an idea we have… anything) and our brain quickly rummages through the box for related concepts. It’s not a comprehensive search by any means; it’s just a quick search that is heavily biased toward concepts at the top of the box (those added or used most recently). This is generally a useful bias, since it’s likely to turn up relevant information quickly.
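The box analogy can be sketched as a toy data structure (the names and the fixed search depth are purely illustrative, not a claim about how memory actually works):

```python
# A toy sketch of the "box" analogy: memory is a stack, and recall only
# rummages through the few most recently added or used items.
class MemoryBox:
    def __init__(self, search_depth=2):
        self.items = []              # front of the list = "top of the box"
        self.search_depth = search_depth

    def add(self, concept):
        self.items.insert(0, concept)

    def recall(self, keyword):
        # Not a comprehensive search: only the top few items are checked,
        # and anything found is moved back to the top (use refreshes it).
        for i, concept in enumerate(self.items[:self.search_depth]):
            if keyword in concept:
                self.items.insert(0, self.items.pop(i))
                return concept
        return None

box = MemoryBox(search_depth=2)
box.add("glass is a true solid")
box.add("glass flows [FALSE]")
box.add("anchors bias estimates")
print(box.recall("glass"))   # finds the recent item: "glass flows [FALSE]"
print(box.recall("solid"))   # None: the older, better item is too deep
```

The point of the sketch is the failure mode: the quick, recency-biased search returns whatever related concept is near the top, even when a better-vetted one is buried deeper in the box.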
If some of the concepts that come up during the search have a [FALSE] tag attached, we’ll ignore them, or maybe even treat them as counter-evidence to whatever we’re evaluating. The problem is that sometimes we were only half-listening when we encountered certain information, and never attached a [FALSE] tag. Or maybe the [FALSE] tag wasn’t attached well enough to stick. For example: “I remember two of my geeky friends arguing about whether glass was a slow-flowing liquid or a true solid, but I forget who wound up being correct when they finally googled it.”
But there are all sorts of other things attached to each bit of knowledge that’s floating around in our brain, besides just a simplistic [FALSE] tag. We can remember where we heard it (college class, hearsay, sci-fi book, newspaper, peer-reviewed publication, etc.) and maybe even how we felt about it at the time (Were we surprised to learn it? Still skeptical afterward?). Ideally, we’ll remember a lot of supporting evidence and ideas, and a few attempts to prove the notion false and how those tests failed.
The things we think of as our core beliefs tend not to be made up of only random hearsay. They tend to be based on ideas we are pretty sure about. They may also have accumulated a bunch of weak supporting evidence over the years, due to confirmation bias. Even weaker beliefs (like those based on some source we read once and were pretty sure was reputable) require a basic amount of evidence.
Perhaps my argument is only about the meaning of the word “belief”. After all, it seems arbitrary to declare some standard for our guesses, some point at which we are willing to call one a belief instead of a best guess. But in practice, that seems to be exactly what we do. I try to set my bar fairly high, and reserve judgement on a situation until I’m reasonably confident, but other people seem willing to form opinions on very little evidence, at the risk of turning out to be wrong. And that’s fine, so long as our opinions are still evidence-based. It doesn’t matter if the threshold is p>.99 or p>.95 or even p>.75, so long as we can agree on p and base our decisions on it.
But concentrating on errors, fallacies, heuristics, and biases that mainly affect our guesses seems like it would have limited value. Perhaps studying them is a way of catching errors early, before they propagate into deeply held beliefs. Or perhaps it would be useful for avoiding continuously adding small bits of support to our deeper beliefs (a form of confirmation bias). It would be extremely interesting to do a longitudinal case study and track the development of a bad idea, from formation to conclusion. Say, from the journal of someone who came to believe in conspiracy theories or something similar. I wonder to what degree our natural human biases influence the long-term development of our opinions.