I have no knowledge of typography, but was taught in university that serif fonts should be used for print, and sans serif for screens; it is very possible that my lecturers were wrong.
Would that make it more true?
No.
What amount of karma and local agreement does it take to get to the truth?
None. The truth is orthogonal to the level of local agreement. That said, local agreement is Bayesian evidence for the veracity of a proposition.
Mathematically, if the truth is orthogonal to the level of local agreement, local agreement cannot constitute Bayesian evidence for the veracity of the proposition. If we’re taking local agreement as Bayesian evidence for the veracity of the proposition, we’re assuming the veracity of the proposition and local agreement are correlated, which would contradict orthogonality.
Either I don’t know what Bayesian evidence is, or you don’t.
My understanding is:
An outcome is Bayesian evidence for a proposition if the outcome is more likely to occur when the proposition is true than when it is false.
Based on that understanding of Bayesian evidence, I argue that Lesswrong consensus on a proposition is Bayesian evidence for that proposition. Lesswrongers have better-than-average epistemic hygiene, and pursue true beliefs. You expect the average Lesswronger to have a higher percentage of true beliefs than a layperson. Furthermore, if a belief is consensus among the Lesswrong community, then it is more likely to be true. A single Lesswronger may have some false beliefs, but the set of false beliefs that would be shared by the overwhelming majority of Lesswrongers would be very small.
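To make the update being claimed here concrete, here is a minimal sketch in Python; the prior and likelihoods are purely illustrative assumptions, not measurements of Lesswrong. Consensus counts as Bayesian evidence exactly when it is more likely to form around true propositions than false ones:

```python
# Minimal sketch of "consensus as Bayesian evidence" via Bayes' theorem.
# All numbers below are illustrative assumptions, not survey data.

def posterior(prior, p_consensus_if_true, p_consensus_if_false):
    """P(proposition is true | consensus observed), by Bayes' theorem."""
    joint_true = prior * p_consensus_if_true
    joint_false = (1 - prior) * p_consensus_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.5                   # prior probability the proposition is true
p_true, p_false = 0.6, 0.3    # assumed: consensus forms twice as often on truths

print(posterior(prior, p_true, p_false))  # 0.667 > 0.5: consensus raises belief
print(posterior(prior, 0.4, 0.4))         # 0.500: equal likelihoods carry no evidence
```

The second call illustrates the other half of the exchange: if consensus is equally likely to form whether the proposition is true or false (orthogonal, in the sense used above), the posterior equals the prior and no evidence is conveyed.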
That assumes that there is a statistical correlation between the two, no? If the two are orthogonal to each other, they’re statistically uncorrelated, by definition.
http://lesswrong.com/lw/nz/arguing_by_definition/
The local agreement (on Lesswrong) on a proposition is not independent of the veracity of the proposition. To claim otherwise is to claim that Lesswrongers form their beliefs through a process that is no better than random guessing. That’s a very strong claim to make, and extraordinary claims require extraordinary evidence.
“The local agreement (on Lesswrong) on a proposition is not independent of the veracity of the proposition.”
Sure, and that is equally true of indefinitely many other populations in the world and the whole population as well. It would take an argument to establish that LW local agreement is better than any particular one of those populations.
Then we are in agreement.
As for Lesswrong vs the general population, I point to the difference in epistemic hygiene between the two groups.
They were lecturers in what subject? Design / typography / etc.? Or, some unrelated subject?
Unrelated subjects (insofar as web design is classified as unrelated).
Well, in that case, what I conjecture is simply that either this (your university classes) took place a while ago, or your lecturers formed their opinions a while ago and didn’t keep up with developments, or both.
“Use sans-serif fonts for screen” made sense. Once. When most people had 72 ppi displays (if not lower), no anti-aliasing, and no subpixel rendering.
None of that has been true for many, many years.
I am currently in my fourth year.
I have expressed this sentiment myself, so it is plausible.
Why? Are people around here more likely to agree with true propositions than false ones? This might be true in general, but is it true in domains where there exists non-trivial expertise? That’s not obvious to me at all. What makes you think so?
I was generalising from the above. I expect the epistemic hygiene on LW to be significantly higher than the norm.
For any belief b, let Pr(b) be the probability that b is true. For all b such that b is a consensus on Lesswrong (i.e., greater than some k% of Lesswrongers believe b), I hold that Pr(b) > 0.50.
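A formalization sketch of that claim; the predicate C_k is my own shorthand, not the commenter’s notation:

```latex
% Formalization sketch. C_k(b) is shorthand (mine, not the commenter's) for
% "b is a Lesswrong consensus", i.e. more than k% of Lesswrongers believe b.
\[
  \forall b :\; C_k(b) \;\Longrightarrow\; \Pr(b) > 0.5
\]
```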
But this is an entirely unwarranted generalization!
Broad concepts like “the epistemic hygiene on LW [is] significantly higher than the norm” simply don’t suffice to conclude that LessWrongers are likely to have a finger on the pulse of arbitrary domains of knowledge/expertise, nor that LessWrongers have any kind of healthy respect for expertise—especially since, in the latter case, we know that they in fact do not.
Do you suggest that the consensus on Lesswrong about arbitrary domains is likely to be true with P ≤ 0.5?
As long as Pr(b | Lesswrong consensus) > 0.5, Lesswrong consensus remains Bayesian evidence for truth.
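For readers tracking the two quantities in play here, Bayes’ theorem relates them as below (C stands for “a Lesswrong consensus forms around b”; the notation is mine). Consensus is Bayesian evidence for b exactly when Pr(C | b) > Pr(C | ¬b); whether Pr(b | C) also clears 0.5 depends additionally on the prior Pr(b):

```latex
% Bayes' theorem, with C = "a Lesswrong consensus forms around b".
\[
  \Pr(b \mid C)
  = \frac{\Pr(C \mid b)\,\Pr(b)}
         {\Pr(C \mid b)\,\Pr(b) \;+\; \Pr(C \mid \lnot b)\,\bigl(1 - \Pr(b)\bigr)}
\]
```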
For some domains, sure. For others, not.
We have no real reason to expect any particular likelihood ratio here, so should probably default to P = 0.5.
I expect that for most domains (possibly all), Lesswrong consensus is more likely to be right than wrong. I haven’t yet seen reason to believe otherwise (it seems you have?).
Again, there is nothing special about this. Given that I believe something, even without any consensus at all, I think my belief is more likely to be true than false. I expect this to apply to all domains, even ones that I have not studied. If I thought it did not apply to some domains, then I should reverse all of my beliefs about that domain, and then I would expect it to apply.
I never suggested that there was anything extraordinary about my claim (au contraire, it was quite intuitive). I do not think we disagree.
Just so we’re clear here:
Profession (Results from 2016 LessWrong Survey):

Profession                                     Change     N    % of respondents
Art                                           +0.800%    51    2.300%
Biology                                       +0.300%    49    2.200%
Business                                      −0.800%    72    3.200%
Computers (AI)                                +0.700%    79    3.500%
Computers (other academic, computer science)  −0.100%   156    7.000%
Computers (practical)                         −1.200%   681    30.500%
Engineering                                   +0.600%   150    6.700%
Finance / Economics                           +0.500%   116    5.200%
Law                                           −0.300%    50    2.200%
Mathematics                                   −1.500%   147    6.600%
Medicine                                      +0.100%    49    2.200%
Neuroscience                                  +0.100%    28    1.300%
Philosophy                                     0.000%    54    2.400%
Physics                                       −0.200%    91    4.100%
Psychology                                     0.000%    48    2.100%
Other                                         +2.199%   277    12.399%
Other “hard science”                          −0.500%    26    1.200%
Other “social science”                        −0.200%    48    2.100%
The LessWrong consensus is massively overweighted in one particular field of expertise (computing) with some marginal commentators who happen to do other things.
As for evidence to believe otherwise, how about all of recorded human history? When has there ever been a group whose consensus was more likely to be right than wrong in all domains of human endeavor? What ludicrous hubris; the sheer arrogance on display in this comment cowed me, and I briefly considered whether I’m hanging out in the right place by posting here.
Let B be the set of beliefs that are consensus among the LW community. Let b be an arbitrary belief. Let Pr(b) be the probability that b is true, and let (b|B) denote the event that b is a member of B.
I argue that Pr(b|B) (the probability that b is true given that b is a member of B) is greater than 0.5; how is that hubris?
If Lesswrongers are ignorant of a particular field, then I don’t expect a consensus to form. Sure, we may have some wrong beliefs that are consensus, but the fraction of consensus beliefs that are right is greater than 1⁄2.
This entire thread is reason to believe otherwise. We have the LessWrong consensus (sans-serif fonts are easier to read than serif fonts). We have a domain expert posting evidence to the contrary. And we have LessWrong continuing with its priors, because consensus trumps expertise.
I, for one, am not continuing with my priors (where do you get that Lesswrong is continuing with its priors?).
It is not clear to me that “serif fonts are easier to read than sans-serif fonts” was ever/is a consensus here. As far as I can tell, fewer than ten people expressed that opinion (and 10 is a very small sample).
One example (if this were one) wouldn’t detract from my point, though. My point is that Lesswrong consensus is better than random guessing.