People being religious is some evidence that religion is true. Aside from drethelin’s point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.
To pick an easy example, I don’t think anyone thinks a Catholic priest can turn wine into blood on command. And if an organized religion does not make predictions that could be wrong, why should you change your behavior based on that organization’s recommendations?
Neither do Catholics think their priests turn wine into actual blood. After all, they’re able to see and taste it as wine afterwards! Instead they’re dualists: they believe the Platonic Form of the wine is replaced by that of blood, while the substance remains. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered Form of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don’t come true, and so it’s more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
I really don’t think that the vast majority of Catholics bother forming a position regarding transubstantiation. One of the major benefits of joining a religion is letting other people think for you.
This is probably true, but the discussion was about religion (i.e. official dogma) making predictions. Lots of holes can be picked in that, of course.
I don’t think it’s fair to say that none of the practical predictions of religion holds up to rigorous examination. In Willpower, Roy Baumeister describes well how organisations like Alcoholics Anonymous can effectively use religious ideas to help people quit alcohol.
Buddhist meditation is also a practice with a good deal of backing from rigorous examination.
On LessWrong, Luke Muehlhauser wrote that Scientology 101 was one of the best learning experiences of his life, notwithstanding the dangers that come from the group.
Various religions do advocate practices that have concrete real-world effects. Focusing on whether or not the wine really gets turned into blood misses the point if you want to weigh the practical benefits and disadvantages of following a religion.
Alcoholics Anonymous is famously ineffective, but separate from that: what’s your point here? Being a Christian is not the same as subjecting Christian practices to rigorous examination to test their effectiveness. The question the original asker asked was not “Does religion have any worth?” but “Should I become a practicing Christian to avoid burning in hell for eternity?”
To me it is only evidence that people are irrational.
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.
Like how Knox’s roommate’s death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.
Whether people believing in a hypothesis is evidence for the hypothesis depends on the hypothesis. If the hypothesis does not contain a claim that there is some mechanism by which people would come to believe in the hypothesis, then it is not evidence. For instance, if people believe in a tea kettle orbiting the sun, their belief is not evidence that it is true, because there is no mechanism by which a tea kettle orbiting the sun might cause people to believe that there is a tea kettle orbiting the sun. In fact, there are some hypotheses for which belief is evidence against. For instance, if someone believes in a conspiracy theory, that’s evidence against the conspiracy theory: in a world in which a set of events X occurs, but no conspiracy is behind it, people would be free to develop conspiracy theories regarding X. But in a world in which X occurs and a conspiracy is behind it, it is likely that the conspiracy will interfere with the formation of any conspiracy theory.
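To make the “mechanism” criterion concrete, here is a minimal sketch (my own toy numbers, not anything from the discussion) of the underlying likelihood-ratio test: belief only counts as evidence for a hypothesis if the hypothesis makes the belief more probable than its negation does.

```python
# Toy Bayesian check (all numbers invented purely for illustration):
# a belief B is evidence for hypothesis H exactly when P(B | H) > P(B | not-H).

def posterior(prior, p_b_given_h, p_b_given_not_h):
    """Posterior P(H | B) via Bayes' theorem."""
    joint_h = prior * p_b_given_h
    joint_not_h = (1 - prior) * p_b_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.01

# Kettle case: the kettle's existence provides no mechanism for the belief,
# so P(belief | kettle) == P(belief | no kettle) and the posterior is unchanged.
print(posterior(prior, 0.001, 0.001))  # 0.01 -- belief is not evidence

# Conspiracy case: a real conspiracy tends to suppress talk about itself,
# so the belief is *less* likely if the conspiracy is real: evidence against.
print(posterior(prior, 0.05, 0.20))    # ~0.0025 -- belief lowers the probability
```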
Bad example. In fact, the example you give is sufficient to require that your contention be modified (or rejected as is).
While it is not the case that there is a tea kettle orbiting the sun (except on earth), there is a mechanism by which people can assign various degrees of probability to that hypothesis, including probabilities high enough to constitute ‘belief’. This is the case even if the existence of such a kettle is assumed not to have caused the kettle belief. Instead, if observations about how physics works and our apparent place within it were such that kettles are highly likely to exist orbiting suns like ours, then I would believe that there is a kettle orbiting the sun.
It so happens that it is crazy to believe in space kettles that we haven’t seen. This isn’t because we haven’t seen them—we wouldn’t expect to see them either way. This is because they (probably) don’t exist (based on all our observations of physics). If our experiments suggested a different (perhaps less reducible) physics then it would be correct to believe in space kettles despite there being no way for the space kettle to have caused the belief.
Yes, but this is different from a generic “People being religious is some evidence that religion is true.”
P(religion is true | overwhelming professing of belief) > P(religion is true | absence of overwhelming professing of belief).
In other words, I think my two formulations are isomorphic. If we define evidence such that absence of evidence is evidence of absence, then one implication is that it is possible for some evidence to exist in favor of false propositions.
This is possible with any definition of evidence. Every bit of information you receive makes you discard some theories which have been disproven, so it’s evidence in favour of each of the ones you don’t discard. But only one of those is fully true; the others are false.
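A tiny worked example of that point, with invented numbers: an observation that rules out one hypothesis raises the posterior of every surviving hypothesis, even though at most one of them is actually true.

```python
# Three mutually exclusive hypotheses with equal priors (numbers invented).
priors = {"H1": 1 / 3, "H2": 1 / 3, "H3": 1 / 3}

# Likelihood of the observed evidence E under each hypothesis;
# E is impossible under H3, so H3 gets discarded.
likelihoods = {"H1": 0.5, "H2": 0.5, "H3": 0.0}

normaliser = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / normaliser for h in priors}

print(posteriors)  # H1 and H2 both rise from 1/3 to 1/2; H3 drops to 0.
# E counted as evidence in favour of both H1 and H2, yet at most one is true.
```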
The issue is: How do you know that you aren’t just as irrational as them?
My personal answer:
I’m smart. They’re not (IQ tests, the SAT, or a million other pieces of evidence). Even though high intelligence doesn’t at all cause rationality, in my experience judging others it’s so correlated as to nearly be a prerequisite.
I care a lot (but not too much) about consistency under the best / most rational reflection I’m capable of. Whenever this would conflict with people liking me, I know how to keep a secret. They don’t make such strong claims of valuing rationality. Maybe others are secretly rational, but I doubt it. In the circles I move in, nobody is trying to conceal intellect. If you could be fun, nice, AND seem smart, you would do it. Those who can’t seem smart, aren’t.
I’m winning more than they are.
That value doesn’t directly lead to having a belief system where individual beliefs can be used to make accurate predictions. For most practical purposes the forward–backward algorithm produces better models of the world than Viterbi. Viterbi optimizes for overall consistency, while the forward–backward algorithm looks at local states.
If you have uncertainty in the data about which you reason, the worldview with the most consistency is likely flawed.
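For readers who haven’t met these algorithms, here is a minimal sketch of the contrast being drawn (a toy two-state HMM with invented parameters, not anything from the discussion): Viterbi returns the single most probable state sequence, while forward–backward gives per-position posteriors that can be read off one state at a time. The two decodings can disagree in general.

```python
import numpy as np

# A deliberately small two-state HMM with invented parameters, used only to
# compare the two kinds of decoding.

states = ["A", "B"]
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])   # trans[i, j] = P(next state = j | current = i)
emit = np.array([[0.9, 0.1],
                 [0.2, 0.8]])    # emit[i, k] = P(symbol k | state i)

def viterbi(obs):
    """Single most probable joint state sequence (a globally consistent path)."""
    n, s = len(obs), len(states)
    delta = np.zeros((n, s))
    back = np.zeros((n, s), dtype=int)
    delta[0] = start * emit[:, obs[0]]
    for t in range(1, n):
        for j in range(s):
            scores = delta[t - 1] * trans[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] * emit[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(obs):
    """Most probable state at each position separately (forward-backward)."""
    n, s = len(obs), len(states)
    fwd = np.zeros((n, s))
    bwd = np.ones((n, s))
    fwd[0] = start * emit[:, obs[0]]
    for t in range(1, n):
        fwd[t] = (fwd[t - 1] @ trans) * emit[:, obs[t]]
    for t in range(n - 2, -1, -1):
        bwd[t] = trans @ (emit[:, obs[t + 1]] * bwd[t + 1])
    gamma = fwd * bwd
    gamma /= gamma.sum(axis=1, keepdims=True)
    return [int(i) for i in gamma.argmax(axis=1)]

obs = [0, 1, 0, 1, 1]  # an arbitrary observation sequence
print("Viterbi:          ", [states[i] for i in viterbi(obs)])
print("Forward-backward: ", [states[i] for i in posterior_decode(obs)])
```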
One example is heat development in some forms of meditation. The fact that our body can develop heat through thermogenin without any shivering is a relatively new biochemical discovery. There were plenty of self-professed rationalists who didn’t believe in any heat development in meditation because people meditating don’t shiver. In examples like that, the search for consistency leads to denying important empirical evidence.
It takes a certain humility to accept that there is heat development during meditation without knowing a mechanism that can account for it.
People who want to signal socially that they know it all don’t have the epistemic humility that allows for the insight that there are important things they just don’t understand.
To quote Nassim Taleb: “It takes extraordinary wisdom and self control to accept that many things have a logic we do not understand that is smarter than our own.”
For the record, I’m not a member of any religion.
I’m pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there’s no time to explain why, for example).
Interesting analogy between “best path / MAP (Viterbi)” :: “integral over all paths / expectation” as “consistent” :: “some other, not-consistent type of thinking”? I don’t see what “integral over many possibilities” has to do with consistency, except that it’s sometimes the correct (but more expensive) thing to do.
I’m not so much talking about humility that you communicate to other people but about actually thinking that the other person might be right.
There are cases where the forward–backward algorithm gives you a path that’s impossible. I would call those paths inconsistent.
That’s one of the lessons I learned in bioinformatics. Having an algorithm that’s robust to error is often much better than just picking the explanation that’s most likely to explain the data.
A map of the world that allows for some inconsistency is more robust than one where one error leads to a lot of bad updates to make the map consistent with the error.
I understand forward-backward (in general) pretty well and am not sure what application you’re thinking of or what you mean by “a path that’s impossible to happen”. Anyway, yes, I agree that you shouldn’t usually put 0 plausibility on views other than your current best guess.
It’s possible that the transition from state A at position 5 to state B at position 6 has probability 0, and yet the path created by forward–backward still goes from A at 5 to B at 6.
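Here is a contrived sketch of that situation (a three-state toy HMM with invented numbers): reading the forward–backward posteriors off position by position produces a state sequence that uses a transition of probability zero, which Viterbi would never do.

```python
import numpy as np

# A contrived three-state HMM (all numbers invented). Emissions are made
# uninformative on purpose so the effect comes purely from the transition
# structure: A -> B has probability zero, yet posterior decoding picks
# A at position 1 and B at position 2.

states = ["A", "B", "C"]
start = np.array([0.40, 0.35, 0.25])
trans = np.array([[0.0, 0.0, 1.0],   # A can only go to C; A -> B is forbidden
                  [0.0, 1.0, 0.0],   # B stays in B
                  [0.0, 1.0, 0.0]])  # C always goes to B
emit = np.full((3, 2), 0.5)          # every state emits both symbols equally
obs = [0, 1]

# Forward-backward posteriors gamma[t, i] = P(state at t = i | observations).
fwd = np.zeros((2, 3))
bwd = np.ones((2, 3))
fwd[0] = start * emit[:, obs[0]]
fwd[1] = (fwd[0] @ trans) * emit[:, obs[1]]
bwd[0] = trans @ (emit[:, obs[1]] * bwd[1])
gamma = fwd * bwd
gamma /= gamma.sum(axis=1, keepdims=True)

posterior_path = [states[i] for i in gamma.argmax(axis=1)]
print(posterior_path)   # ['A', 'B']
print(trans[0, 1])      # 0.0 -- the A -> B step in that path is impossible
# For comparison, Viterbi on this model returns ['A', 'C'], a realizable path.
```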