Think back for a second to your pre-Bayesian days, before your exposure to the sequences. Now the question is: what probability would you have assigned to the claim that some chain of arguments could persuade you the statements above are true? In my case, it would have been near zero.
P(convincing argument exists) >= (number of Bayesians post-sequences - number of Bayesians pre-sequences) / (number of people reading the sequences).
Or, in simpler terms: "the sequences are a convincing chain of argument, and they are effective." I'll admit I'm working with a smart group of people, but none of my friends have had trouble with any of the inferential steps in the sequences. (If I jump into the advanced stuff I'll still lose them, of course, thanks to the miracle of inferential distances, and I haven't convinced all my friends of all those points yet :))
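To make the bound concrete, here is the same estimate in explicit notation, followed by a worked instance. All three counts (2,000 readers, 100 Bayesians before, 300 after) are purely hypothetical numbers chosen for illustration, not data from this thread:

```latex
% Lower bound on P(a convincing chain of arguments exists),
% estimated from the sequences' observed conversion rate.
\[
P(\text{convincing argument exists}) \;\ge\;
  \frac{N_{\text{post}} - N_{\text{pre}}}{N_{\text{readers}}}
\]
% Hypothetical illustration (every count below is an assumption):
%   N_readers = 2000, N_pre = 100, N_post = 300
%   bound = (300 - 100) / 2000 = 0.10
```

Even under these made-up numbers the point stands: a modest conversion rate already forces a non-negligible lower bound on the probability that a convincing argument exists, which is exactly what a near-zero prior estimate fails to anticipate.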
I assume that people in their pre-Bayesian days aren't even aware of the existence of the sequences, so I don't think they can use that to calculate their estimate. What I meant to get at is that it's easy to be really certain a belief is false when it's intuitively wrong (but not wrong in reality) and the inferential distance is large. I think it's a general bias that people are disproportionately certain about beliefs at large inferential distances, but I don't think that bias has a name.
(Not to mention that people are really bad at estimating inferential distance in the first place!)