The word belief seems to be used in two different contexts.
1) “I believe that the apple is red.” This implies the apple is red. It uses belief in the encyclopaedic sense.
2) “I believe in anthropogenic global warming.” This means that the person thinks that global warming should be taken seriously in plans for the future. It could mean anything from high probability and low consequences to low probability and high consequences.
I think that this is dangerous from a sanity point of view, as the two senses can easily get confused. I can’t think of a good word for the second meaning; perhaps “matters”? So you would say “AGW matters”, rather than “I believe in AGW”.
I’m not sure what you mean by 1). The truth-value of that sentence is independent from the truth-value of the clause “the apple is red”. If the sentence is true, this does seem to entail that the person speaking it assigns a probability to the apple being red, but even in that case I see no reason to set the threshold at 1⁄2, such that the sentence is logically equivalent to “I think this apple more likely to be red than not”.
As for 2), one might say “I worry about AGW”, and you can worry about risks to which you assign far less than 1⁄2 probability, and still take action on the basis of such assessments.
“I believe in AGW” seems to have a different meaning altogether: it means roughly “I believe that the scientific case for AGW is basically sound”. To say “I am an AGW skeptic” (putatively the complement of the belief statement) is to say “I believe that the AGW theory lacks a sound scientific grounding”. The entailments of these statements have more to do with trust in scientific, political and economic institutions than they have to do with the facts of the matter: few people professing either belief have much direct knowledge of the relevant physical facts and theoretical insights.
From the wikipedia link:
Belief is the psychological state in which an individual holds a proposition or premise to be true.
Holding something to be true means you can do deductive inference on it and ignore the cases where it is untrue.
So rather than flipping between being a skeptic and a believer, you can hold the position that you don’t know, but assign a probability to the proposition. It means you have to consider the consequences of what happens in both situations. In this state you should try to convince people to share your level of uncertainty, and seek to reduce your uncertainty, rather than arguing purely pro or con.
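The stance described here can be put in numbers: weighing both outcomes rather than rounding the probability to 0 or 1. A minimal sketch, where the credence and payoffs are made-up illustrations, not anything from the discussion:

```python
# Holding "I don't know, but here's my probability" means weighing
# BOTH branches, instead of rounding the probability to 0 or 1.
# The credence and payoffs below are hypothetical illustrations.

p_true = 0.3  # credence that the proposition is true

payoff_if_true = 100   # acting on it pays off when it holds
payoff_if_false = -20  # and costs something when it doesn't

# A pure "believer" ignores the second term; a pure "skeptic" the first.
# Assigning a probability forces you to keep both.
expected_payoff = p_true * payoff_if_true + (1 - p_true) * payoff_if_false

print(expected_payoff)  # 0.3*100 + 0.7*(-20) = 16.0
```

With these made-up numbers the act is worth taking even though you assign the proposition well under 1⁄2 probability, which is exactly the state the comment describes.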
As for 2), one might say “I worry about AGW”, and you can worry about risks to which you assign far less than 1⁄2 probability, and still take action on the basis of such assessments.
This I am perfectly happy with! I would love for people to be able to say that they don’t believe the blue box holds the diamond, but still pick it anyway. I’m objecting to the devaluing of the word belief.
I’m finding this exchange quite interesting.
Holding something to be true means you can do deductive inference on it and ignore the cases where it is untrue.
Again, I am uneasy with the mixing of “belief” language and “probability” language in one and the same sentence, perhaps because I’m only recently delving into probability theory and the two are not yet well integrated.
If probabilities are quantities between 0 and 1, but never either of these extremes (such that everything is a “level of uncertainty”), then we can never use “belief” in the sense of ignoring or rejecting the complement entirely. (And even then: for things I’m really sure of, I tend to use “I know” rather than “I believe”.)
Based on an incomplete examination of my own writing (email archives of the past year or so), I use the phrase “I believe” to express conclusions I have provisionally reached but that new evidence could invalidate. (I use the phrase “I don’t believe” much more rarely, and so I can say that in none of my emails for the past ten years does the phrase appear with the meaning “I don’t believe in X”. Of course, per my earlier statement about modals, I don’t expect that my beliefs will actually be expressed using the phrase “I believe”.)
In that sense, “I believe” does have to do with deductions; my beliefs are the conclusions I judge to be plausible enough based on the evidence that they are worth following through to further conclusions. If I believe Brian will be at the conference in August, then I may email Brian to plan a meeting at the conference. If I believe probability theory is a useful tool, I therefore believe I should learn more about probability theory.
I’m starting to feel like we should probably chuck out our current language for dealing with states of knowledge and start over.
For example, how does that mean we should be assessing the pay-offs of various actions? How much evidence should we expect to tip the scales in the other direction?
Do you normally think through the consequences of sending an email to Brian if he doesn’t happen to be going to the conference?
More than you might think. Warning: irrelevancies ahead.
I’m an introvert, i.e. I tend to only reach out to people after some agonizing over whether it’s the right thing to do. My default attitude is to clam up. Sometimes it’s hellish: I pass someone on the street, I smile and say hi, they fail to acknowledge me, and I spend the rest of the day in a blue funk over either a) what some stranger is thinking of me now or b) having suffered rejection.
Yes, it’s a fucked-up way to be. You learn to adjust. :)
But you get the point I was trying to make? A more extreme example is that you don’t think through the possibility that turning on your bathroom tap might cause a negative singularity (due to it causing an unusual mixture of bacteria that have sex and form a self-aware gene network capable of creating novel regulatory pathways and recursive self-improvement).
That possibility doesn’t cross your mind, and probably shouldn’t when making decisions about turning on taps. You X that turning on your tap won’t cause a singularity. I want a word for X. “Believe” seems to be tainted.
Yes, I think I get your point.
“I’m confident” seems a good antonym of “I worry”. I’m confident that turning on the tap is safe for humanity. I have reasonable expectations of getting water when I turn on the tap (though these are sometimes violated).
The tap example is reminiscent of the blue tentacle line in Technical Explanation. Riffing on that, beliefs perhaps correspond to scenarios you construct without really thinking about it, plausible extrapolations from actual knowledge.
The question was whether you could say that you believed in something that you thought occurred with less than 0.5 probability. If you thought you would crash your car with less than 0.5 probability, you may still wear a seatbelt, but you wouldn’t say that you believed you would crash your car.
Above you wrote:
We now have a situation where p(blue) < .5 but where you nevertheless “believe that” the diamond is in the blue box, insofar as that is the box you’d pick.
In your three-box example, if you believed the diamond was in the blue box with p=0.4, and the other two with p=0.3 each, it would sound very strange to me to say “I believe the diamond is in the blue box.” Instead, I would say “I think it’s most likely that the diamond is in the blue box.”
The use of the word “believe” doesn’t correspond to a single probability level, and as such, isn’t very Bayesian. For instance, say there is a lottery with one million tickets, and you have one ticket. Do you believe you will not win? No, and that seems true no matter how many tickets the lottery has.
Essentially, use of the word “believe” indicates that you’re talking about an axiom, a statement that you’re using as a further assumption without questioning or looking at the probabilities.
Another way to see it: consider the case where an examiner asks you to choose between three boxes (red, green, blue) one of which contains a diamond.
Normally you would assign 1⁄3 probability to each box containing the diamond, but you have cleverly slipped in a priming cue while talking earlier with the examiner, saying “I love blue skies”, and your experience with priming is such that your probability assignments are now 1/3+2*epsilon for blue, and 1/3-epsilon for each of red and green (so the three still sum to 1).
We now have a situation where p(blue) < .5 but where you nevertheless “believe that” the diamond is in the blue box, insofar as that is the box you’d pick.
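To make that concrete, here is a small sketch with an arbitrary value of epsilon, choosing shifts that keep the three probabilities summing to 1. Blue is the single most probable box, and so the one you would pick, yet its probability stays well below 1/2:

```python
# The three-box example in numbers. epsilon is an arbitrary small
# shift attributed to the priming cue, purely for illustration.
eps = 0.01
probs = {"blue": 1/3 + 2 * eps, "red": 1/3 - eps, "green": 1/3 - eps}

assert abs(sum(probs.values()) - 1.0) < 1e-9  # still a distribution

best = max(probs, key=probs.get)  # the box you would actually pick
print(best, round(probs[best], 4))  # blue 0.3533
```

So the box you “believe that” the diamond is in, in the sense of picking it, carries only about 0.35 probability here.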
The verb “believe” expresses what linguists call a modality. I think we should be careful when mixing in the same sentence statements of probability and everyday language modalities, because they belong to different levels of abstraction. (Disclaimer: I am no linguist and claim only basic familiarity with concepts like modalities and the pragmatics of language. But they do seem like awesome tools, that I should learn more about just as I should learn more about probability theory.)
I wouldn’t assume that “I believe X” is the same as “it is more likely than not that X is true”. “I believe” is called an epistemic modality, one that expresses a state of knowledge. An interesting property of epistemic modalities is that they usually weaken whatever statement they are associated with. For instance, “The cat is on the mat” is a nice definite statement. If you told someone, “I believe the cat is on the mat”, they might ask “Oh, but are you sure?”. Paradoxically, even “I’m sure the cat is on the mat” would be taken as expressing less confidence than “The cat is on the mat” without a modal part.
Bruno Latour is fond of pointing out that the construction of scientific knowledge largely involves stripping modalities. You go from “Kahneman (1982) suggests that X”, to “Kahneman has shown that X”, to “It is well established that X”, to “Since X”, and finally you don’t even mention X any longer, it has merged into background knowledge.
You go from “Kahneman (1982) suggests that X”, to “Kahneman has shown that X”, to “It is well established that X”, to “Since X”, and finally you don’t even mention X any longer, it has merged into background knowledge.
Unfortunately, this process can occur even in the absence of any attempt to obtain evidence for X. Sometimes just by accident.
Depends on the consequences of belief; in particular the price you’d pay for mistaken belief, vs reward of correctness.
How does it depend? Can I expect you to assign P(X) < 0.5 but still state that you believe in X, if the payoffs are right?
Sure. I do not believe that I’d crash my car if I drove today. But I’d still wear a seatbelt.
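The seatbelt case can be sketched as an expected-cost comparison. All the numbers below are made-up illustrations, not real crash statistics:

```python
# You do NOT "believe" you'll crash today, yet the belt still wins
# on expected cost. Every figure here is a hypothetical illustration.
p_crash = 0.001
cost_belt = 1                 # minor inconvenience of wearing it
cost_crash_no_belt = 100_000  # injury cost, unbelted
cost_crash_with_belt = 10_000 # injury cost, belted

expected_with_belt = cost_belt + p_crash * cost_crash_with_belt
expected_without_belt = p_crash * cost_crash_no_belt

print(expected_with_belt, expected_without_belt)  # 11.0 100.0
```

The low-probability branch dominates the decision because its consequences are large, which is why the action can come apart from the belief statement.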