I already mentioned this as a comment to another post, but it’s worth repeating here: The human brain has evolved some “dedicated hardware” for accelerating certain tasks.
I already mentioned in that other post that one such hardware was for recognizing faces, and that false positives generated by it cause us to feel haunted and to perceive ghosts (the brain receives a subconscious signal indicating the presence of a face, but on consciously looking around we see no one there).
Another such hardware (which I only briefly alluded to in the other post) is “agency detection”: trying to figure out whether a certain event occurred “naturally”, or because another agent (a friend, a foe, or a neutral party?) caused it to happen. False positives from this hardware cause us to “detect agency” where there is none. And if the event seems far beyond any human’s capacity to control, then, since humans seem to be the most powerful “natural” beings in the universe, the agent in question must be something supernatural, like God.
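The strength of this misfire is easy to underestimate, because it is really a base-rate effect: if genuine agents are rare among ambiguous events, even a fairly accurate detector will fire mostly on non-agents. Here is a toy Bayes calculation; every number in it is invented purely for illustration.

```python
# Toy base-rate calculation: when the agency detector fires, how often
# is there really an agent? All numbers are invented for illustration.
p_agent = 0.01                # prior: 1% of ambiguous events involve a real agent
p_fire_given_agent = 0.95     # detector sensitivity
p_fire_given_no_agent = 0.10  # detector false-positive rate

p_fire = (p_fire_given_agent * p_agent
          + p_fire_given_no_agent * (1 - p_agent))
p_agent_given_fire = p_fire_given_agent * p_agent / p_fire

print(f"P(real agent | detector fired) = {p_agent_given_fire:.2f}")  # ~0.09
```

On these made-up numbers, roughly nine out of ten “detected” agents aren’t there, which is exactly the raw material for ghosts and gods.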
I don’t have all the details worked out, but it seems plausible that agency-detection could have been naturally selected for, perhaps to be able to integrate better into a society, and to help with knowing when it is appropriate to cooperate and when it is appropriate to defect. It’s a useful skill to be able to differentiate between “something good happened to me, because this person wanted something good to happen to me and made it happen. They cooperated (successfully). I should become their friend.” versus “something good happened to me, despite this person wanting something bad to happen to me, but it backfired on them. They defected (unsuccessfully). I should be wary of them.”
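The inference described above is essentially a small lookup over (inferred intent, actual outcome) pairs. A minimal sketch of that classification follows; the category labels and responses are my own hypothetical framing, not anything from the original comment.

```python
# Hypothetical sketch: classify an interaction by inferred intent and outcome.
def classify(intent: str, outcome: str) -> str:
    """intent: 'help' or 'harm'; outcome: 'good' or 'bad' (for me)."""
    table = {
        ("help", "good"): "successful cooperation -> befriend them",
        ("help", "bad"):  "failed cooperation -> stay friendly, no blame",
        ("harm", "good"): "failed defection -> be wary of them",
        ("harm", "bad"):  "successful defection -> retaliate or avoid",
    }
    return table[(intent, outcome)]

print(classify("harm", "good"))  # the 'backfired on them' case from above
```

The point of the sketch is that the useful signal is the intent column, not the outcome column, which is why dedicated intent-detection hardware would pay for itself.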
From there, bring in Anna Salamon and Steve Rayhawk’s ideas about tag-along selection, and it seems like religion really may be a tag-along evolutionary attribute.
Anyway, I used to be scared of ghosts and the dark and things like that, but once I found out about the face-recognition hardware and its false positives (and other hardware, such as sound localization), this fear almost completely disappeared, and almost instantaneously at that.
I was already an atheist or agnostic (depending on what definitions you assign to those words) when I found out about these hardware false positives, so I can’t say for sure whether, had I been religious, this would have deconverted me.
But if it worked at making me stop “believing”[1] in ghosts, then perhaps it could work at making people stop believing in God as well.
[1]: Here I am using the term “believe” in the sense of Yvain’s post on haunted rationalists. Like everyone else, I would assert that ghosts didn’t really exist, and would have been willing to wager that they didn’t. And yet, like everyone else, I was still scared of them.
Excellent description. Reminds me a little of Richard Dawkins in “The God Delusion,” explaining how otherwise useful brain hardware ‘misfires’ and leads to religious belief.
You mention agency detection as one of the potential modules that misfire to bring about religious belief. I think we can generalize that a little further and say fairly conclusively that the ability to discern cause and effect was favored by natural selection, and that, given limited mental resources, selection favored errors where a cause was perceived even when there was none over errors in the opposite direction. In the simplest scenario, imagine hearing a rustling in the bushes: you’re better off always assuming there’s a cause and checking for predators and enemies. If you wrote it off as nothing, you’d soon be removed from the gene pool.
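The rustling-bushes logic can be made explicit as an expected-cost comparison. With any remotely plausible numbers (the ones below are invented), “always check” beats “ignore” because the cost of a miss dwarfs the cost of a false alarm:

```python
# Toy expected-cost comparison for the rustling-bushes scenario.
# All probabilities and costs are invented for illustration.
p_predator = 0.05    # chance the rustle really is a predator
cost_check = 1       # small cost: wasted time and vigilance
cost_eaten = 10_000  # catastrophic cost: removed from the gene pool

expected_cost_check = cost_check                # always pay the small cost
expected_cost_ignore = p_predator * cost_eaten  # gamble on the big one

print(expected_cost_check, expected_cost_ignore)  # 1 vs 500.0 -> always check
```

Checking wins whenever p_predator > cost_check / cost_eaten, which here is one chance in ten thousand, so a detector hugely biased toward false positives is exactly what selection should build.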
Relatedly, there is evidence that the parts of the brain responsible for our ability to picture absent or fictional people are the same ones used in religious thought. It’s understandable why these were selected for: if you come back to your cave to find it destroyed or your belongings stolen, it helps to be able to imagine the neighboring tribe raiding it.
These two mechanisms seem to apply to religion: people see a cause behind the most mundane events, and especially behind rare or unusual ones. They disregard the giant sample of times such events failed to happen, but those are of course less salient. It’s a quick hop to imagining an absent/hidden/fictional person (an agent) responsible for causing these events.
Undermining religion on rational grounds must thus begin with destroying the idea that there is necessarily an agent intentionally causing every effect. This should get easier: market economies are famously results of human action, but not of human design; any given result may be the effect of agents’ actions without being anything any agent intended. Such results are not fundamentally different from, say, storms: effects of physical causes but with no intent behind them.
It would probably also help to remind people of sample size. I recently heard a story from a religious believer who based her faith on her grandfather’s survival in the Korean War against very long odds. Someone like that must be reminded that many people did not survive similar incidents, and that there is likely no force behind it but random chance. By the same logic, if life is possible on 0.000000001% of planets, and arises on the same fraction of those, then given enough planets you will still get life.
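The underlying arithmetic is just that of repeated trials: an event with tiny per-trial probability p becomes near-certain over N trials, since P(at least once) = 1 - (1 - p)^N. A quick sketch, with illustrative numbers only:

```python
import math

# P(at least one occurrence in N independent trials) = 1 - (1 - p)**N.
# Computed via log1p/expm1 for numerical stability when p is tiny.
def p_at_least_once(p: float, n: float) -> float:
    return -math.expm1(n * math.log1p(-p))

print(p_at_least_once(1e-9, 1e9))         # ~0.63 even at billion-to-one odds per trial
print(p_at_least_once(1e-9, 1e12))        # ~1.0: effectively certain
print(p_at_least_once(0.001, 1_000_000))  # 'miraculous' survivals among many soldiers
```

With a million soldiers each facing thousand-to-one odds, survivors of “miracles” are a statistical certainty; the believer’s grandfather is simply the survivor who lived to tell the story.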
Agency misfires and causal misfires can help to suggest religion. For that suggestion to get past your filters, the sanity waterline has to be low. I don’t invent a new religion every time I see a face in the clouds or three dandelions lined up in a row.
Neither do I, though I’m often tempted to find a reason for why my iPod’s shuffle function “chose” a particular song at a particular time. [“Mad World” right now.]
It seems that our mental ‘hardware’ is very susceptible to agency and causal misfires, leaving an opening for something like religious belief. Robin explained religious activities and beliefs as important in group bonding [http://www.overcomingbias.com/2009/01/why-fiction-lies.html], but the fact that religion arose may just be a historical accident. Something would likely have arisen to fill the same niche as a group-bonding mechanism; perhaps religion just found the gap first. From an individual perspective, this hardly means that the sanity waterline is low. In fact, evolutionarily speaking, playing along may be the sanest thing to do.
The relevant sentence from Robin’s post: “Social life is all about signaling our abilities and cooperativeness, and discerning such signals from others.” As Norman points out [link below], self-deception makes our signals more credible, since we don’t have to act as believers if we are believers. As a result, in the ancestral environment at least, it’s “sane” to believe what others believe and not subject it to a conscious and costly rationality analysis. You’d basically be expending resources to find out a truth that would make it harder for you to deceive others, which is costly in itself.
Of course today, the payoff from signaling group membership is far lower than ever before, which is why religious belief, and especially costly religious activities, violate sanity. Which, perhaps, is why secularism is on the rise:
http://www.theatlantic.com/doc/200803/secularism
I think this is a good answer to Eliezer’s thought experiment. Teach those budding rationalists about the human desire to conform even in the face of the prima facie ridiculousness of the prevailing beliefs.
Teach them about greens and blues; teach them about Easter Islanders building statues with their last failing stock of resources (or is that too close to teaching about religion?). Teach them how common the pattern is: when something is all around you, you are less likely to doubt its wisdom.
Human rationality (at least for now) is still built on the blocks and modules provided to us by evolution. They can lead us astray, like the “posit agency” module firing when no agent is there. But they can also be powerful correctives. A pattern-recognizing module is a dangerous thing when we create imaginary patterns… but, oh boy, when there actually is a pattern there, let that module rip!
For reference: http://tvtropes.org/pmwiki/pmwiki.php/Main/RandomNumberGod
If I recall, that trope corresponds to the prior points stating that humans are driven by evolutionary heuristics to assign agency-based causality to a random probability distribution. However, the Laconic does summarize that fallacy rather well. Narrative examples such as tropes do tend to ease comprehension.
+1 Karma
This should get easier: market economies are famously results of human action, but not of human design; any given result may be the effect of agents’ actions without being anything any agent intended. Such results are not fundamentally different from, say, storms: effects of physical causes but with no intent behind them.
The conspiracy theory of economics remains prevalent, however, and very difficult to disabuse people of. So I’m not sure this is that helpful a handle to disabuse people of religion.
Good speculation