Intriguingly, even though the sample size increased more than sixfold, most of these results are within one to two percent of the numbers from the 2009 survey, which supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.
This is not just intriguing. To me this is the single most significant finding in the survey.
It’s also worrying, because it means we’re not getting better on average.
If the readership of LessWrong has gone up similarly in that time, then I would not expect to see an improvement, even if everyone who reads LessWrong improves.
Yes, I was thinking that. Suppose it takes a certain fixed amount of time for any LessWronger to learn the local official truth. Then if the population grows exponentially, you’d expect the fraction that knows the local official truth to remain constant, right? But I’m not sure the population has been growing exponentially, and even so you might have expected the local official truth to become more accurate over time, and you might have expected the community to get better over time at imparting the local official truth.
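To spell that out (a sketch under exactly those two assumptions: membership growing exponentially at rate $r$ from an initial size $N_0$, and a fixed lag $T$ before a member has absorbed the local official truth), the fraction who know it at time $t$ is

$$\frac{N(t-T)}{N(t)} = \frac{N_0 e^{r(t-T)}}{N_0 e^{rt}} = e^{-rT},$$

which is constant in $t$: it shrinks if growth speeds up or the material takes longer to absorb, but the mere passage of time doesn’t change it.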
Regardless of what we should have expected, my impression is that LessWrong as a whole tends to assume it’s getting closer to the truth over time. If that’s not happening because of newcomers, that’s worth worrying about.
Note that it is possible for newcomers to hold the same inaccurate beliefs as their predecessors while the core improves its knowledge or expands in size. In fact, as LW grows it will have to recruit from, say, Hacker News (where I first heard of LW) instead of Singularity lists, producing newcomers less in tune with the local truth.
(Unnamed’s comment shows interesting differences in opinion between a “core” and the rest, but (s)he seems to have skipped the only question with an easily-verified answer, i.e. Newton.)
The calibration question was more complicated to analyze, but I’ve now looked at it: core members were slightly more accurate at estimating the correct year (p = .05 when looking at the size of the error, p = .12 when looking at whether the estimate fell within the 20-year range), but there’s no difference in calibration.
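For what it’s worth, here’s a minimal sketch of that kind of comparison on simulated data (the actual tests used aren’t stated above; a Mann-Whitney test for the size-of-error comparison and a chi-square test for the within-20-years comparison are plausible stand-ins, and all the numbers below are made up):

```python
# Minimal sketch on simulated data; the real survey analysis may have
# used different tests. TRUE_YEAR is the Principia's publication year.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_YEAR = 1687

# Simulated year estimates: "core" members vs. everyone else.
core = rng.normal(TRUE_YEAR, 60, size=200)
rest = rng.normal(TRUE_YEAR, 80, size=800)

err_core = np.abs(core - TRUE_YEAR)
err_rest = np.abs(rest - TRUE_YEAR)

# (1) Size of the error (reported above as p = .05):
# one-sided test that core errors are stochastically smaller.
_, p_size = stats.mannwhitneyu(err_core, err_rest, alternative="less")

# (2) Whether the estimate fell within the 20-year range
# (reported above as p = .12): 2x2 hit/miss contingency table.
table = [[(err_core <= 20).sum(), (err_core > 20).sum()],
         [(err_rest <= 20).sum(), (err_rest > 20).sum()]]
_, p_window, _, _ = stats.chi2_contingency(table)

print(f"p(error size) = {p_size:.3f}, p(within 20 yr) = {p_window:.3f}")
```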
(“He”, btw.)
Couldn’t the current or future data be correlated with length of readership to determine this?
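Something like this would do it (a sketch on made-up data; using months of readership as the tenure measure and error on the Newton question as the accuracy measure is my assumption, not a description of the actual survey fields):

```python
# Sketch: does longer readership predict smaller error? All data
# below is simulated; the real survey fields may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
months_reading = rng.exponential(18, size=1000)        # tenure, months
noise = rng.normal(0, 25, size=1000)
error_years = np.maximum(80 - 0.5 * months_reading + noise, 0)

# Rank correlation is robust to the skewed tenure distribution.
rho, p = stats.spearmanr(months_reading, error_years)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```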
It just means that we’re at a specific point in memespace. The hypothesis that we are all rational enough to identify the right answers to all of these questions wouldn’t explain the observed degree of variance.