I believe the following five things.
(1) Barcelona will not win the Champions League.
(2) Manchester U will not win the Champions League.
(3) Chelsea will not win the Champions League.
(4) Liverpool will not win the Champions League.
(5) I falsely believe one of the statements (1), (2), (3) and (4).
This seems to me like a reasonable counterexample to Wittgenstein’s doctrine.
You need to work with probabilities, and then make statements about your expected Bayes-score instead of truth or falsity; then you’ll be consistent. I have a post on this but I can’t remember what it’s called.
“Qualitatively Confused.”
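The "Bayes-score" mentioned above is the logarithmic scoring rule: you grade a belief by the log of the probability you assigned to what actually happened, rather than calling the belief true or false. A minimal sketch (illustrative code, not from the thread) of what a forecaster's *expected* score looks like:

```python
import math

# Logarithmic ("Bayes") score: a belief is graded by the log of the
# probability assigned to the outcome that actually occurred.
# Higher (closer to 0) is better; certainty in a falsehood scores -inf.
def log_score(p_assigned_to_actual_outcome: float) -> float:
    return math.log(p_assigned_to_actual_outcome)

# A calibrated forecaster who assigns probability 0.8 to an event they
# judge to have probability 0.8 expects a score of
#   p * log(p) + (1 - p) * log(1 - p).
p = 0.8
expected_score = p * log_score(p) + (1 - p) * log_score(1 - p)
print(expected_score)  # approximately -0.5004
```

On this accounting there is no inconsistency in assigning 0.8 to each of five statements while expecting one to be false; the expected score is simply what it is.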
topynate: It was only for reasons of space that I listed five events with probability 0.8 each, rather than 1000 events with probability 0.999 each; the modification is obvious.
Eliezer: Point taken.
I think Wittgenstein’s point was that you’re using ‘believe’ in a strange way. I have no idea what you meant by the above comment; you’re effectively claiming to believe and not believe the same statement simultaneously.
If you’re using paraconsistent logic, you should really specify that before making a point, so the rest of us can more efficiently disregard it.
I judge each of the four teams to have probability 0.2 of winning the Champions League. Their victories are mutually exclusive. Hence I judge each of statements (1)-(4) to have probability 0.8; and since statement (5) is true exactly when one of the four listed teams wins, it too has probability 0.8.
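The arithmetic here can be checked directly (illustrative code, using the probabilities stated in the comment): each negative statement gets probability 0.8, and, because the victories are mutually exclusive, so does statement (5).

```python
# Probabilities from the comment: four teams, each judged 0.2 to win,
# with the victories mutually exclusive (at most one of them wins).
p_win = {"Barcelona": 0.2, "Manchester U": 0.2, "Chelsea": 0.2, "Liverpool": 0.2}

# Statements (1)-(4): "team X will not win" has probability 1 - 0.2 = 0.8.
p_statements_1_to_4 = {team: 1 - p for team, p in p_win.items()}

# Statement (5) is true exactly when one of the four listed teams wins;
# by mutual exclusivity, that probability is the sum of the four: 0.8.
p_statement_5 = sum(p_win.values())

print(p_statements_1_to_4)           # each 0.8
print(round(p_statement_5, 10))      # 0.8 (rounded to absorb float error)
```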
Hm. Wittgenstein requires that the meaning be “indicative”. In English the indicative mood is used to express statements of fact, or of things held to be very probable. They don’t necessarily have to be true or probable, of course, but they express beliefs of that nature. You say “I believe X” when you assign a probability of at least 0.8 to X; 0.8 is probable, but not very probable. Would you state baldly “Barcelona will not win the Champions League”, given your probabilities? I doubt it. When you say instead “I believe Barcelona will not win the Champions League”, you could equally say “Barcelona will probably not win the Champions League.” But that isn’t the indicative mood; it’s something called the potential/tentative mood, which has no special form in English but does in some other languages (e.g. darō in Japanese, which has quite a complex system for expressing probability). It’s better just to state your degree of belief as a numeric probability.
He is illustrating that “belief” has more than one meaning, for all that he hasn’t clarified the meanings.
A candidate theory would be belief-as-cold-hard-fact versus belief-as-hope-and-commitment.
Consider a politician fighting an election. Even if the polls are strongly against them, they can’t admit that they are going to lose as a matter of fact, because that would make the situation worse. They invariably refuse to admit defeat. That is irrational if you treat belief as a solipsistic, passive registration of facts, but it makes perfect sense if you recognise that beliefs do things in the world and influence other people. If one person commits to something, others can too, and that can lead to it becoming a fact.
Treating people as nicer than they are might make them nicer than they were.
Of course, if “belief” does have these two meanings, the argument against dark side epistemology largely unravels...