Sorry if I’m about 10 years late to this conversation; if this exact idea has already been raised and responded to in detail, feel free to point me towards any existing resources.
Personal beliefs: I am a staunch atheist/agnostic who does not believe in God, especially not any specific God. This is a matter of looking at the data and deciding based on the evidence, and I hold it with a high degree of certainty given how overwhelming the Bayesian evidence is.
Situation that brought this question to my mind: I was talking to a friend about their belief in the Christian God (denomination unknown). They told me they found God later in life; they had been actively suicidal (cutting, etc.), and volunteering at a Christian camp helping underprivileged children showed them how powerful belief in God can be, and how the hope it provides is a positive force for many, especially those with the least to be hopeful about otherwise. After that, this friend started believing in God and stopped being suicidal. (I believe this friend is being entirely sincere, in case my tone did not convey that.)
It seems to me that while I still believe religion as a whole has a negative average expected value for the average person, this situation paints a picture in which, for many individuals, the value of religion is strongly positive, and those likely to receive the most benefit are also those with the least intersection with rationality, basically the opposite of WEIRD populations. I’ve never tried hard to change others’ beliefs about religion, but this realization definitely makes it harder to quickly defend my atheism to others, or to explain any disdain for religion that accidentally slips out. It also suggests that for many individuals, trying to convince them atheism is correct would not only be socially rude, but also not even correct if one’s goal is a consequentialist “wellbeing”.
How does one deal with the situation where one believes that adding information will consistently make people less happy/satisfied? It feels like a sort of cognitive dissonance. Would this be considered an infohazard? (All of this assumes you can successfully identify those for whom religion adds net value, and that you grant my proposition that it does for them.)
Also, is this the consensus stance here on how to interact with people who believe in religion, or am I missing some part of the picture?
Are you asking whether, in some situations, religion can be a force of good? (I am trying to avoid complicated words here.)
Your example shows that hope, even if it is false hope in the supernatural, can help people e.g. avoid suicide, which is a real effect in the real world. Belief in false rewards or false punishments can similarly change real behavior.
And that’s just at the individual level. Many religions come with a social structure, telling you who your (spiritual) boss is, so they can react flexibly to things that are not described in the holy books. They are, in effect, parallel governments, doing some of the things that governments do, such as providing support for the poor. They can be even more effective, because their workers believe they are under divine surveillance.
So… clearly yes.
That said, all of the effects above can work in either direction. Religion can also provide false despair, and reward harmful actions. The command structure of organized religion can also be used to kill members of other religions, or to provide impunity for crimes committed by the religious leadership.
This is further complicated by the interaction between the religion and the surrounding society. For example, in secular countries, religion is often a force of good, because the evil actions are often illegal. (It is legal to organize charity, but not to organize burning witches or heretics.) If a religion, even a benevolent one, became much stronger, it would probably legalize everything it wants to do, and suddenly the effects might be quite different. This is mostly to say that even “religion = good” would not necessarily imply “more religion = better”.
(Related part of the Sequences: Can Humanism Match Religion’s Output?)
The Sequence post Doublethink (Choosing To Be Biased) addresses the general form of this question, which is, “Is it ever optimal to adopt irrational beliefs in order to advance instrumental goals, such as happiness, wealth, etc?”
I’ll quote at length what I think is the relevant part of the post:
In other words, the trouble with wilfully blinding yourself to reality is that you don’t get to choose what you’re blinding yourself to. It’s very difficult to say, “I’m going to ignore rationality for these specific domains, and only these specific domains.” The human brain really isn’t set up like that. If you’re going to abandon rational thought in favor of religious thought, are you sure you’ll be able to stop before you’re, e.g., questioning the efficacy of vaccines?
Another way of looking at the situation is by thinking about The Litany of Gendlin:
Pretty much anything is “locally correct for consequentialists in some instances”, that’s an extremely weak statement. You can always find some possible scenario where any decision, no matter how wrong it might be ordinarily, would result in better consequences than its alternatives.
A consequentialist must in general ask which decisions will lead to the best consequences in any particular situation. Deciding to believe false things, or more generally to put more credence in a belief than it is due for some advantage other than truth-seeking, is generally disadvantageous for knowing what will have the best consequences. Of course there are some instances where the benefits might outweigh that problem, though it would be hard to tell for that same reason, and saying “this is correct in some instances” is hardly enough to conclude anything substantial. (Not saying you’re doing that, but I’ve seen it done, so you have to be careful with that sort of reasoning.)
I’m avoiding terms like “epistemic” and “consequential” and such in this answer, and instead attempting to give a colloquial answer to what I think is the spiritual question.
(I’m also deliberately avoiding iterating over the harms of blind traditionalism and religious thinking, assuming that, since you’re an atheist, you don’t reject most of the criticisms of religion.)
(Also also, I am being brief. For more detail, I would point you to the library, to read about Christianity’s role in the rise of the working and uneducated classes in the 1600s-1800s, and perhaps some anthropologists’ work for more modern iterations.)
Feel free to delete/downvote if this is unwanted.
It’s hard to say “all religion is bad” when, for example, without Christianity, Gregor Mendel’s pea studies might have come about a decade or more later. In the absence of strong institutions, Christian religion often provided structure and basic education where there was none, long before the government began to provide schooling.
Sect leaders needed you to know how to read so you could read the Bible, and would often teach you how to write as well. Because of this, it’s hard to deny the usefulness of Christianity as an easy cultural through-line, keeping its constituents culturally up to date and locally connected to, and invested in, the people around them.
The various sects of Christianity benefited greatly when their local populace was well-read and understood the Bible, so religious leaders, pastors, and so on were incentivized to educate and build up the people around them. Whatever one might think about the ethics of those leaders, they did provide a service, and they often encouraged and taught people skills or information they did not have before, because they were naturally invested in their local communities.
Constituents who were wealthier, happier, better connected, and better socialized were more able to coordinate. If you confessed your concerns to your pastor, they, as an overcomer of coordination problems, would often put you in contact with people in your local area who had the means and ability to help with your problem, from rebuilding a burned-down barn to putting in a wheelchair ramp for disabled people in trailer parks.
That is… I have no qualms with: “if it feels good, and doesn’t harm others or impinge on their rights, it’s okay to do it, with caveats*.”
When the platonic ideal of the communal Christian Fellowship operates, it is well worth the time and energy spent. One need only listen to the song being sung to tell whether it is from Eru Ilúvatar or from Morgoth’s discord.
The question seems to be along the lines of “can there be consequentialist reasons for a person to adopt beliefs they know to be irrational?”
I make the assumption that the person actually can choose to do so, because this does seem to be possible for at least some people. I am not sure that it is possible for all people, and if it’s not possible for a given person then it is definitely not worth attempting from a consequentialist point of view. Attempting and failing may be even worse than not attempting at all.
The biggest danger seems to be that it will make you less sane. However, if you know your mind is already malfunctioning and in danger of destroying itself, then perhaps becoming even more irrational but less self-destructive might be beneficial to you.
That last part, “to you”, is a key qualifier though: it may harm others for you to do so. There are plenty of examples of harm done to those near a person when that person starts to hold irrational beliefs, in addition to the general harm to the sanity waterline from increasing the proportion of people who hold irrational beliefs.
But sure, I can see that there might be some cases where the consequences turn out to be net positive. I still would absolutely not recommend it in general.
Cases where information (or, from a consequentialist stance, even material action) would cause stress to the recipient can benefit from considering the time horizon, i.e. the duration over which the outcome is to be optimised. Indeed, the term “locally beneficial” can refer to a short time horizon as well as a short space horizon. The question could therefore be reframed as: “what is the optimal time/space horizon when presenting worldview-challenging information?”
Oftentimes the answer might be seen as a matter of personality or lifestyle, though of course there are cases, such as near the expected time of death, where the time available for cognitive correction and later benefit is limited. Personally, I prefer longer time horizons and larger space horizons, particularly since even if a person should die before the “return on investment” is realised, holding accurate or inaccurate beliefs usually has knock-on effects on others. This applies both to the propagation of those beliefs and to, for example, the effects of choices in voting or interpersonal judgement.
Many old and even some new religious beliefs appear to be essentially rationalisations justifying gut instincts and/or short-chain moral reasoning. By “short-chain”, I mean short in terms of causal-chain inference or time/space horizon: for example, blaming the most proximal cause of an event rather than seeking deeper understanding of contributing and precipitating factors, or being too quick to judge the effects of a proposed or recently enacted policy, such as increased educational spending.
One might even surmise that many religious folk are religious precisely because they tend to use short time/space horizons in their inference of cause and effect. For example, LGBT+ acceptance may be seen as “bad” because in the shorter term it may result in fewer newborn children, yet in the longer term it may result in healthier families, better technology, and more balanced resource utilisation. A similar argument could be made about crime prevention, where traditional Western religion generally blames the immediately identifiable party while ignoring the social/systemic factors that promote poverty and crime. The result is often a game of “Whac-A-Mole”, where crimes keep popping up while the “unseen” (read: unconsidered, long-chain) hand keeps making new “moles”.
In conclusion, choosing short time/space horizons may help to maintain short-term, short-distance comfort, but doing so would in many cases neglect life and wellbeing outside of that myopic circle.