There should be a question at the end: “After seeing your results, how many of the previous responses did you feel a strong desire to write a comment analyzing/refuting?” And that’s the actual rationalist score...
But I suspect there may be a phenomenon here where the median LWer is more likely to score highly on this test, despite being less representative of LW culture, while core, more representative LWers are unlikely to score highly.
Presumably there’s some kind of power law with LW use (10000s of users who use LW for <1 hour a month, only 100s of users who use LW for 100+ hours a month).
I predict that the 10000s of less active community members are probably more likely to give “typical” rationalist answers to these questions: “Yeah, (religious) people stupid, ghosts not real, technology good”. The 100s of power users, who are actually more representative of a distinctly LW culture, are less likely to give these answers.
I got 9/24, by the way.