Comment thread for questions people want to add to the census. You don’t need to articulate the exact question; “We should ask something about pets, but I’m not quite sure what exactly I’m getting at or how to phrase it” is fine.
I’d love to add some questions about discussion norms and expectations on LessWrong.
Over the last year there was some pretty strong disagreement on how to argue with other users. I know of about two and a half posts on how rationalist discourse ought to be conducted, and I’m curious how much consensus any of them have. (I suspect mentioning the disagreement could reawaken it, and I would be sad if the object-level disagreement broke out here. If there is a crux that could be answered by a community census, though, there’s conveniently one planned anyway!)
I haven’t checked what questions you currently ask, so maybe all the stuff below is superfluous or off the mark; apologies if so.
Anyway, re: discussion norms & expectations, I figure that questions about mood affiliation might work fine?
E.g. “On a scale of 1 to 5, how pleasant do you find it to engage with commenters on LW?” Or “In comparison to other sites on the Internet, how pleasant do you find LW for discourse?” Or other questions in a similar vein.
Those questions might need to be disambiguated from questions about how (un)pleasant it is to post on LW.
And since one claim was that some styles of discourse push authors away, related questions on that topic would be: “In the last year, I have written less/the same number of/more LW posts”, same for comments. And questions for LW writers who used to write posts, but who now write less or not at all.
I’m also interested in how crossposting authors interact with LW. E.g. we now have a bunch of Substack crossposts here.
Overall, engagement on a feed-based site like LW seems more directly downstream from authors than from commenters, so I’m interested in questions re: how to get more people to (cross)post their stuff here. Especially non-AI stuff. And I wonder how current discussion norms & commenter behavior affect the willingness of authors to do so.
Mood affiliation questions could give a kind of baseline. Offhand, “On a scale of 1 to 7, with 1 being very unpleasant, 4 being neutral, and 7 being very pleasant, how do you feel about engaging with commenters on LW?” seems serviceable? If I went down this path for my own curiosity though, it would be in pursuit of something more specific about figuring out what the expectations or norms are.
I’m sort of suspicious that “in the last year, I have written less/the same number of/more LW posts” wouldn’t get a useful answer because the selection effect has already happened. That’s even assuming I reach those people in the first place! Like, if you went from writing a post a month in 2021 to writing zero posts in 2022, and then also had zero posts in 2023, you’d answer “the same number of LW posts.” Asking over a longer timespan would work, and asking people who used to write posts and now write less why they stopped could also work. Though for the second, I’d be tempted to put the question somewhere else in the census so as to not blatantly prime people. (Does priming work like that? I think the replication crisis suggests no.)
At least one question about crossposts seems worthwhile, but I don’t know what to ask. “If you crosspost, about how hard was it to set that up?” But then we’re trying to strain information out of a small subset of users. “Do you write elsewhere?” and “Do you crosspost to LW?” perhaps catch more.
In general, I want to reduce the number of questions.
I like the train of thought!
I’m gonna suggest 20 questions (should take maybe 2.3 minutes to fill out), which of course is in tension with you wanting to reduce the number of questions. Maybe you could split them out into some extra-optional part of the survey?
I’ve been factor-analyzing hundreds of statements for the purpose of creating a measure of ideology/worldview. I haven’t finished the test yet, but it would still be interesting to include some of the top items for the factors so that the LessWrong data can be observed and compared to my data, once it’s published.
Each item has 5 response options: “Disagree strongly”, “Disagree”, “Neither”, “Agree”, and “Agree strongly”. In total, there are 5 factors, which should be close to independent. Some of the top-loading items for each factor are listed below, with “(R)” meaning that the item is reverse scored (a rough scoring sketch follows the item lists):
Factor 0:
Companies that focus on profit buy up and reduce the wages of companies that try to pay workers more.
The stock market fails to punish powerful people for poor investments because people in power just get the government to bail them out using taxpayer money.
The government has regulations that make financial markets work well and fairly. (R)
The government knows well how to balance costs and benefits. (R)
Factor 1:
Academia has been taken over by a woke culture which suppresses dissent.
Minority groups tend to be biased and favor wokeness over fairness.
You can see from the gender ratios in income and work areas that there’s still tons of sexism around. (R)
Climate science is critically important due to global warming. (R)
Factor 2:
One of the greatest benefits of art is that management can place it in workplaces to set a calming, productive tone.
Brand reputation is the main way consumers know that products are safe and high-quality.
Fashion is a good way to build confidence.
Democratic elections are basically polls about who you trust to lead the country, so democratically elected leaders are considered especially trustworthy.
Factor 3:
Teaching will need to start incorporating AI technology.
Genetically modified organisms will make farming more effective in the future.
AI cannot replace designers as computers lack creativity. (R)
Elon Musk’s project of colonizing Mars is a useless vanity project. (R)
Factor 4:
To save the environment, people should eat seasonal locally grown food instead of importing food from across the world.
Claims that it’s soon the end of the world are always hyperbolic and exaggerated.
It is important that the news is run independent of the government so it can serve as a check on those in power.
The moon landing was faked. (R)
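To make the response format concrete, here is a minimal scoring sketch. This is my own illustration, not the author’s actual pipeline: the item keys are made-up shorthands for a few of the statements above, and it simply flips reverse-scored “(R)” items on the 1–5 scale before averaging within each factor.

```python
# Hypothetical scoring sketch (item names invented for illustration).
RESPONSES = {"Disagree strongly": 1, "Disagree": 2, "Neither": 3,
             "Agree": 4, "Agree strongly": 5}

# Item metadata: (factor index, reverse-scored?)
ITEMS = {
    "profit_firms_suppress_wages": (0, False),
    "govt_regulates_markets_well": (0, True),
    "academia_woke_capture": (1, False),
    "gender_ratios_show_sexism": (1, True),
}

def score(answers: dict[str, str]) -> dict[int, float]:
    """Average the (possibly reverse-scored) item responses within each factor."""
    per_factor: dict[int, list[int]] = {}
    for item, answer in answers.items():
        factor, reverse = ITEMS[item]
        value = RESPONSES[answer]
        if reverse:
            value = 6 - value  # flip 1..5 so "Agree strongly" counts against the factor
        per_factor.setdefault(factor, []).append(value)
    return {f: sum(vals) / len(vals) for f, vals in per_factor.items()}

print(score({"profit_firms_suppress_wages": "Agree",
             "govt_regulates_markets_well": "Disagree",
             "academia_woke_capture": "Neither",
             "gender_ratios_show_sexism": "Agree strongly"}))
# -> {0: 4.0, 1: 2.0}
```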
This sure seems like a well-written set of very politically charged questions. Thank you for stepping into a gap where I sure would not have added anything myself.
Right now I’m tempted to take out most of section 14: Bonus Politics Questions and replace it with your set here. There are eleven questions there now, of which the only two I like are “If you are an American, what party are you registered with?” and “How would you describe your level of interest in politics?” Do your twenty work as a set, or do you by chance have a favourite ten?
Context of that last question: if you had a clear favourite ten, I could keep the two bonus politics questions I liked and replace the others with your ten, giving us 12 instead of last year’s 11. At ~22 I’d want to make cuts elsewhere to try to keep the length from growing too much.
The factors work as a set; they have been selected based on a factor analysis of over 400 statements, to capture things which influence as much of one’s worldview as possible. But this makes the item lists for each factor essentially arbitrary, such that they can be easily substituted or expanded or shortened, without changing the core idea much.
I guess if you want to shorten it by 2x, you could remove the 2nd and the 4th item for factors 0, 1, and 2; remove the 3rd and the 4th item for factor 3; and remove the 2nd and the 3rd item for factor 4. I wouldn’t recommend this, though, as the items are usually only 0.4-0.6 correlated with the factors, so more items would give a more accurate measurement of the worldview. (With 4 items that each have a correlation of 0.5 with the underlying factor, the scale’s correlation with the factor would be 0.76; with only 2 items, it would be 0.63.)
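For what it’s worth, those two numbers check out under the usual assumption that the items are related to each other only through the shared factor (so any two items correlate at r²). A minimal sketch of the arithmetic, with a helper name of my own choosing:

```python
from math import sqrt

def composite_factor_correlation(k: int, r: float) -> float:
    """Correlation of a k-item sum with the factor, each item correlating r with it,
    assuming items share variance only through that factor."""
    return k * r / sqrt(k + k * (k - 1) * r ** 2)

print(round(composite_factor_correlation(4, 0.5), 2))  # 0.76
print(round(composite_factor_correlation(2, 0.5), 2))  # 0.63
```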
I would not like it if the question “If you are an American, what party are you registered with?” were one of very few politics questions. It is too country-specific.
I have a research idea in mind—I would like to know how certain expectations shape people’s decisions. In addition to certain questions already in the survey, the question suggestions for this are:
1. Were you surprised by the capabilities of ChatGPT?
2. Self-rate your knowledge of
Global income and wealth distributions
AI
Geopolitics
Climate change
Practical ethics
Animal suffering
Effective interventions to help the poor
3. Self-rate your social skills on a scale from 0 to 10
4. Suffering of poor people that live today touches me emotionally: 0 to 10 scale
5. Suffering of animals that live today touches me emotionally: 0 to 10 scale
6. Whether people 10,000 years in the future exist touches me emotionally: 0 to 10 scale
7. Whether humanity will go extinct within the next 100 years touches me emotionally: 0 to 10 scale
8. I rate my expectations as:
- insert “Noisy to well-calibrated” scale
- insert “Biased towards optimism to biased towards pessimism” scale
9. I believe that the median human’s life in 2040 compared to today will be (your median expectation):
better than today
worse than today
doesn’t apply because humans will be extinct
other answer:
10. I do not have more children than I have because:
Lack of a partner
Unwilling partner
This is my ideal family size
More are planned or expected
I don’t have time
Personal finance reasons
Personal biology reasons
It is more important to help others who exist
I think the future is not livable
I think they would be born into a short life and/or suffer
Later is better
Other reasons:
11. My overall happiness:
(0 to 10 scale)
12. I expect to live to an age of:
13. I save a relevant amount of money or other resources for old age:
- No, because I do not expect to live long enough
- No, because I expect an age of abundance
- No, but I think I should
- No, for other reasons
- Yes
I would benefit from hearing what secondary sources (i.e., not papers or blog posts written by researchers about their own research) people find useful for learning about AI alignment research.
Hrm. The laziest version of that is a free response section. A slightly better version might be multiple-select checkboxes with an “Other, fill in your own” option. What secondary sources are there?
If I keep on that thought and combine it with an inclination to make questions answerable by as many people as possible, I notice I find out about new AI alignment research mostly via Twitter. (I am not an AI researcher.) Would you only be interested in answers from researchers?
I think a free response section would be fine. For checkbox suggestions, I’d start with this survey I ran in 2022 and the comments on that post.
I’m not only interested in answers from researchers, but it would be good to break it down by that.
Maybe that is a question for the Open Thread or just a general forum question instead of a survey question?
A survey question gets me more responses, and more representative ones.
In which sense do you need the answer to be “representative”?
I’d like it to cover the community of people interested in these resources, and not be selected for people who read open threads or people who are willing to answer publicly.
The census answers you’ll get to read are the census answers people are willing to have be public. I guess it’s not attached to their names, which is maybe what you meant?