If this survey generates interesting psychometric research, someone might try to get a journal article out of it. If so, we will need your explicit consent to have (an aggregate of) your anonymous data published.
Someone? Who is someone? Honestly, I'm curious, because I can't think of who these someones would be. Which research psychologists who aren't already members of Less Wrong pay attention to it? I doubt Scott would publish something from it himself. Maybe he shares the results on Slate Star Codex, and a psychiatrist friend of his considers Less Wrong an interesting subject pool, so they use the data? This is something I'd want to know before giving consent. Even a rough description of who Scott imagines using this data would make me more comfortable.
For the 'Gender' and 'Sexual Orientation' categories, why not replace the 'Other' radio button with a text box subjects can fill in themselves?
I've got a hunch that within the Less Wrong community there's enough diversity among mathematicians that we'd discover something interesting if the 'mathematician' career option were split into two choices: 'applied mathematics' and 'pure mathematics'. By interesting, I mean the ratio of one type to the other might reflect how the community thinks about mathematics, especially on issues relevant to MIRI. From there, we could try to infer what's going on. If nothing interesting turns up, then my hypothesis is falsified, and the survey tried something new without threatening its integrity.
The Political Compass Test generates quantified results along two axes:

x-axis: left-right
y-axis: authoritarian-libertarian
This is my graph as a sample. The Political Compass Test can be taken in five minutes. If finer-grained quantification yields much more potential value when mining the raw data, then an extra five minutes on an already long survey is a small price for subjects to pay. The survey could provide two text boxes for each subject to enter their score along each axis, and if there's a spreadsheet program behind the survey somewhere, it can organize the results automatically. Whatever Scott can tell us by analyzing that data might be more useful than the discrete radio buttons of surveys past.
If this sort of thing is worth doing with made-up statistics, imagine the value of information we’ll get by using real ones.
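To be concrete, once respondents' two axis scores are in a spreadsheet, the aggregation itself is trivial. A minimal Python sketch, using invented sample scores (the real numbers would come from the survey data):

```python
from collections import Counter

# Hypothetical sample of Political Compass scores (x = left-right,
# y = authoritarian-libertarian), each axis from -10 to +10.
# These values are made up purely for illustration.
scores = [(-4.5, -3.2), (2.1, -5.0), (-6.0, 1.5), (-1.2, -2.8)]

def quadrant(x, y):
    """Label which quadrant of the compass a respondent falls in."""
    horiz = "left" if x < 0 else "right"
    vert = "libertarian" if y < 0 else "authoritarian"
    return f"{vert}-{horiz}"

# Mean position of the community on each axis.
mean_x = sum(x for x, _ in scores) / len(scores)
mean_y = sum(y for _, y in scores) / len(scores)

# How respondents distribute across the four quadrants.
counts = Counter(quadrant(x, y) for x, y in scores)

print(f"mean left-right score: {mean_x:.2f}")
print(f"mean authoritarian-libertarian score: {mean_y:.2f}")
print(counts.most_common())
```

This is far richer than a single radio button: the same raw pairs support means, quadrant counts, or a scatter plot against any other survey variable.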
If that's too much data for Scott to handle alone, we could ask our friend Peter Hurford, a data scientist with a double major in psychology and political science, whether .impact, the distributed volunteer task force of effective altruists he coordinates, would be willing to help. Honestly, I can only imagine Scott is incredibly busy, and outsourcing the work to a trustworthy few who'd gladly spend their own spare time on it might be worthwhile.
I'd personally appreciate it if the 'Moral Views' question were converted to a rating system. There could be a set of Likert scales for how strongly you identify with virtue ethics, consequentialism, and deontology. Alternatively, there could be a ranking system in which you order them from closest to furthest from your own views. Again, this offers finer-grained data, so there's more value of information to be had.
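For what it's worth, either version is easy to score. A hypothetical Python sketch with invented Likert responses (1 = don't identify at all, 5 = strongly identify):

```python
# Made-up Likert responses for the three moral frameworks,
# purely to illustrate how the ratings could be aggregated.
responses = [
    {"virtue": 3, "consequentialism": 5, "deontology": 2},
    {"virtue": 4, "consequentialism": 4, "deontology": 1},
    {"virtue": 2, "consequentialism": 5, "deontology": 3},
]

# Mean identification score per framework.
means = {
    framework: sum(r[framework] for r in responses) / len(responses)
    for framework in ("virtue", "consequentialism", "deontology")
}

# Frameworks ordered from most to least identified-with --
# this recovers the ranking view from the same Likert data.
ranking = sorted(means, key=means.get, reverse=True)

print(means)
print(ranking)
```

Note that collecting Likert ratings subsumes the ranking option: a ranking can be derived from the ratings, but not vice versa.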
Part Five keeps me out because I've never taken an IQ test, and getting one off the Internet can be too expensive. Additionally, I'm Canadian, so the scores on all the other tests aren't relevant to me. Not that I mind much, but it might be something to think about for next time, since maybe a third or more of subjects won't be able to respond to those questions.
My opinion as a single data point: this survey is just long enough. I've learned a lot more on Less Wrong since last year. My technical comfort zone has expanded, so when I read questions about, e.g., assigning probabilities to future events, my eyes don't glaze over as much as they used to. The survey therefore feels shorter than last year's, which I remember was so long that I didn't even bother.
If you're including a poll on opinions of feminism, I'd be interested to see one on the men's rights movement as well.