Let’s say I make six predictions or statements that I believe to be true about someone I’ve never met and I say the statements taken as a whole are true with P = 0.7. Note that I do not claim to be psychic.
The P of each statement must then lie between 0.7 and 1.0, and if they are equal then the P of each statement is 0.7^(1/6) ≈ 0.94. Let’s say 0.9, because I doubt any statement about this type of probability should be reported with two significant figures, and perhaps even one significant figure without an attached tolerance band is a bit of a stretch.
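A quick numeric check of that arithmetic (a minimal sketch in Python; the variable names are mine, and it assumes the six statements are treated as independent and equally probable):

```python
# Per-statement probability needed so that six independent, equally
# probable statements are jointly true with P = 0.7.
joint_p = 0.7
n_statements = 6

per_statement_p = joint_p ** (1 / n_statements)
print(per_statement_p)                   # ~0.9423
print(per_statement_p ** n_statements)   # recovers ~0.7
```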
I’d say that a P this high for each statement, given this example, is well nigh impossible.
Agreed?
Maybe I’m not so underqualified as to be unable to enjoy this forum.
You mean that all the statements are true—right? You’re evaluating “a AND b AND c AND d AND e AND f”?
The P of each statement must then lie between 0.7 and 1.0
Correct.
if they are equal then the P of each statement is 0.7^(1/6) ≈ 0.94
You are assuming the statements are independent of each other. That’s not necessarily so.
To take an extreme example, all six statements could be a function of the same single property/event. In such a case the P of each is 0.7 and the P of all of them is still 0.7.
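A small sketch of the two extremes being contrasted here, independence versus six statements that all hinge on one underlying event; the numbers are the ones from the example:

```python
# How probable must each of six statements be for their conjunction
# to reach P = 0.7, under two extreme dependence structures?
joint_target = 0.7

# (a) Independent statements: each must satisfy p ** 6 = 0.7.
p_each_independent = joint_target ** (1 / 6)   # ~0.94

# (b) All six statements are functions of the same single event
#     (perfect dependence): the conjunction holds exactly when that
#     event occurs, so each statement only needs P = 0.7.
p_each_dependent = joint_target                # 0.7

print(p_each_independent, p_each_dependent)
```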
I’d say that a P this high for each statement, given this example, is well nigh impossible.
I can state with P = .94 (actually much higher) that you know how to read English. Is that an impossible level of certainty?
The real question isn’t the probability assigned, but the prior probability distribution and the evidence. You’re on Less Wrong, and out of hundreds of conversations I’ve yet to meet somebody on Less Wrong who can’t speak English, so I start with a prior much higher than .94. (I don’t care to calculate it, since I don’t even know the exact sample size, but it’s somewhere in the vicinity of .99.) I also have evidence that you read and write English, pushing that estimate slightly higher.
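For the curious, one way to ballpark a prior like that is Laplace’s rule of succession. The commenter explicitly declines to do this calculation, and the sample size below is my own assumption, so take this only as an illustration of why hundreds of all-English conversations would land near .99:

```python
# Illustrative only: Laplace's rule of succession, (s + 1) / (n + 2),
# as a rough prior for "the next Less Wrong user speaks English"
# after observing n users who all did.  The sample size is assumed,
# not taken from the thread.
n_observed = 300    # hypothetical "hundreds of conversations"
s_successes = 300   # all of them spoke English

prior_estimate = (s_successes + 1) / (n_observed + 2)
print(prior_estimate)   # ~0.997
```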
Somebody could throw this statement into a list of several others about any given Less Wrongian without meaningfully changing the overall probability.
It’s the relationship of the statements to their prior probabilities that matters.
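To make that concrete with made-up numbers (and assuming independence just for the illustration): folding one near-certain statement into a conjunction of several others barely moves the total.

```python
# A conjunction of several statements with some overall probability...
joint_of_other_statements = 0.7

# ...plus one extra statement that is nearly certain for any Less Wrong
# user, such as "this person can read English".
p_reads_english = 0.99

# The near-certain statement changes the conjunction only slightly.
new_joint = joint_of_other_statements * p_reads_english
print(new_joint)   # 0.693, versus 0.7 without the extra statement
```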
Everything Lumifer said, plus:
P(A) ≥ P(A ∧ B) and P(B) ≥ P(A ∧ B)
and using log-odds allows for some finer psychological control over tiny values of probability (see Jaynes’s book).
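A brief sketch of that log-odds point; the probabilities below are my own examples, and the base-10 scaling gives roughly the "evidence in decibels" units Jaynes discusses:

```python
import math

def log_odds_db(p):
    """Probability -> log-odds in decibels, 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

# Near-certain probabilities that look almost identical on the 0-1 scale
# are clearly separated on the log-odds scale.
for p in (0.94, 0.99, 0.999, 0.9999):
    print(p, round(log_odds_db(p), 1))
# 0.94 -> ~12 dB, 0.99 -> ~20 dB, 0.999 -> ~30 dB, 0.9999 -> ~40 dB
```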