Are people around here more likely to agree with true propositions than false ones? This might be true in general.
I was generalising from the above. I expect the epistemic hygiene on LW to be significantly higher than the norm.
For any belief b, let Pr(b) be the probability that b is true. For all b such that b is a consensus on LessWrong (greater than some k% of LessWrongers believe b), I hold the belief that Pr(b) > 0.50.
But this is an entirely unwarranted generalization!
Broad concepts like “the epistemic hygiene on LW [is] significantly higher than the norm” simply don’t suffice to conclude that LessWrongers are likely to have a finger on the pulse of arbitrary domains of knowledge/expertise, nor that LessWrongers have any kind of healthy respect for expertise—especially since, in the latter case, we know that they in fact do not.
Do you suggest that the consensus on LessWrong about arbitrary domains is likely to be true with P ≤ 0.5?
As long as Pr(b is true | b is a LessWrong consensus) > 0.5, then LessWrong consensus remains Bayesian evidence for truth.
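The Bayesian-evidence claim can be made concrete with a toy calculation. The numbers below are invented purely for illustration; the point is only that whenever consensus forms more readily around true beliefs than false ones, observing a consensus raises the posterior probability of truth.

```python
# Hypothetical numbers, chosen only to illustrate the Bayes update.
prior_true = 0.5                 # Pr(b) before observing any consensus
p_consensus_given_true = 0.6     # Pr(consensus | b is true)
p_consensus_given_false = 0.3    # Pr(consensus | b is false)

# Bayes' theorem: Pr(b is true | consensus)
numerator = p_consensus_given_true * prior_true
denominator = numerator + p_consensus_given_false * (1 - prior_true)
posterior = numerator / denominator

likelihood_ratio = p_consensus_given_true / p_consensus_given_false
print(f"likelihood ratio   = {likelihood_ratio:.2f}")   # 2.00 -- greater than 1, so consensus is evidence
print(f"Pr(true|consensus) = {posterior:.2f}")          # 0.67 -- up from the 0.50 prior
```

Note that the argument only needs the likelihood ratio to exceed 1; how *much* evidence a consensus provides depends entirely on those two conditional probabilities, which is exactly what the disagreement below is about.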
For some domains, sure. For others, not.
We have no real reason to expect any particular likelihood ratio here, so we should probably default to P = 0.5.
I expect that for most domains (possibly all), LessWrong consensus is more likely to be right than wrong. I haven’t yet seen reason to believe otherwise (it seems you have?).
Again, there is nothing special about this. Given that I believe something, even without any consensus at all, I think my belief is more likely to be true than false. I expect this to apply to all domains, even ones that I have not studied. If I thought it did not apply to some domains, then I should reverse all of my beliefs about that domain, and then I would expect it to apply.
I never suggested that there was anything extraordinary about my claim (au contraire, it was quite intuitive). I do not think we disagree.
Just so we’re clear here:
Profession (Results from 2016 LessWrong Survey)

    Profession                                      Change     N      % of sample
    Art                                             +0.800%    51     2.300%
    Biology                                         +0.300%    49     2.200%
    Business                                        −0.800%    72     3.200%
    Computers (AI)                                  +0.700%    79     3.500%
    Computers (other academic, computer science)    −0.100%    156    7.000%
    Computers (practical)                           −1.200%    681    30.500%
    Engineering                                     +0.600%    150    6.700%
    Finance / Economics                             +0.500%    116    5.200%
    Law                                             −0.300%    50     2.200%
    Mathematics                                     −1.500%    147    6.600%
    Medicine                                        +0.100%    49     2.200%
    Neuroscience                                    +0.100%    28     1.300%
    Philosophy                                      0.000%     54     2.400%
    Physics                                         −0.200%    91     4.100%
    Psychology                                      0.000%     48     2.100%
    Other                                           +2.199%    277    12.399%
    Other “hard science”                            −0.500%    26     1.200%
    Other “social science”                          −0.200%    48     2.100%
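The concentration in computing can be read straight off the survey shares above. A quick check (numbers copied from the table; the decision to group the three “Computers” rows together is mine):

```python
# Shares of respondents by profession, copied from the 2016 survey table above.
shares = {
    "Computers (AI)": 3.5,
    "Computers (other academic, computer science)": 7.0,
    "Computers (practical)": 30.5,
    "Engineering": 6.7,
    "Mathematics": 6.6,
    "Medicine": 2.2,
    "Biology": 2.2,
}

# Sum the three computing rows.
computing = sum(v for k, v in shares.items() if k.startswith("Computers"))
print(f"Computing professions: {computing:.1f}% of respondents")  # 41.0%
```

So computing alone accounts for about 41% of respondents, while fields like medicine or biology each sit around 2%.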
The LessWrong consensus is massively overweighted in one particular field of expertise (computing) with some marginal commentators who happen to do other things.
As for evidence to believe otherwise, how about all of recorded human history? When has there ever been a group whose consensus was more likely to be right than wrong in all domains of human endeavor? What ludicrous hubris! The sheer arrogance on display in this comment cowed me; I briefly considered whether I’m hanging out in the right place by posting here.
Let B be the set of beliefs that are consensus among the LW community. Let b be any arbitrary belief, and let Pr(b) be the probability that b is true. Let b ∈ B denote the event that b is a member of B.
I argue that Pr(b | b ∈ B) (the probability that b is true, given that b is a consensus belief) is greater than 0.5; how is that hubris?
If LessWrongers are ignorant of a particular field, then I don’t expect a consensus to form. Sure, we may have some wrong beliefs that are consensus, but the fraction of consensus beliefs that are right is greater than 1/2.
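Both sides of this dispute can be seen in one toy simulation. All parameters below (25 members, individual accuracy 0.6, a 70% agreement threshold, the error-correlation model) are made up for illustration; this is not a model of LessWrong. The independent case is the Condorcet jury effect the pro-consensus argument relies on; the correlated case is the failure mode the overweighting objection points at, where the whole group shares a blind spot and gets things wrong together.

```python
import random

random.seed(0)

def consensus_accuracy(n_members=25, p=0.6, threshold=0.7,
                       shared_error=0.0, trials=20000):
    """Fraction of consensus positions that match the truth.

    Each member judges a proposition correctly with probability p,
    independently -- except that with probability shared_error the whole
    group shares one wrong community prior and fails together
    (a correlated error).
    """
    right = wrong = 0
    for _ in range(trials):
        if random.random() < shared_error:
            votes_correct = 0                  # correlated failure: everyone wrong
        else:
            votes_correct = sum(random.random() < p for _ in range(n_members))
        frac = votes_correct / n_members
        if frac >= threshold:                  # consensus on the true position
            right += 1
        elif frac <= 1 - threshold:            # consensus on the false position
            wrong += 1
    return right / (right + wrong)

print(consensus_accuracy())                    # independent errors: well above 0.5
print(consensus_accuracy(shared_error=0.5))    # correlated errors: far below 0.5
```

With independent errors, individually mediocre judges (p = 0.6) produce consensus positions that are right the overwhelming majority of the time; with heavily correlated errors, consensus positions are mostly wrong even though each member is individually better than chance. So Pr(b | b ∈ B) > 0.5 hinges on how independent LessWrongers’ errors are, not just on their individual accuracy.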
This entire thread is reason to believe otherwise. We have the LessWrong consensus (sans-serif fonts are easier to read than serif fonts). We have a domain expert posting evidence to the contrary. And we have LessWrong continuing with its priors, because consensus trumps expertise.
For one, I’m not continuing with my priors (where do you get that LessWrong is continuing with its priors?).
It is not clear to me that “sans-serif fonts are easier to read than serif fonts” ever was, or is, a consensus here. As far as I can tell, fewer than ten people expressed that opinion (and ten is a very small sample).
One example (if this even was one) wouldn’t detract from my point, though. My point is that LessWrong consensus is better than random guessing.