2024 Unofficial LessWrong Census/Survey
The Less Wrong General Census is unofficially here! You can take it at this link.
The oft-interrupted annual tradition of the Less Wrong Census is once more upon us!
If you are reading this post and identify as a LessWronger, then you are the target audience. If you don’t identify as a LessWronger but you read posts here, or maybe go to house parties full of rationalists, or possibly read rationalist fanfiction and like talking about it on the internet, or you’re not a rationalist but just, idk, adjacent, then you’re also the target audience.
If you want to just spend five minutes answering the basic demographics questions before leaving the rest blank and hitting submit, that’s totally fine. The survey is structured so that the fastest and most generally applicable questions are (generally speaking) towards the start. At any point you can scroll to the bottom and hit Submit, though you won’t be able to add more answers once you do. If it helps, it’s about two-thirds the size of last year’s survey.
The survey will remain open from now until at least January 1st, 2025. I plan to close it sometime on January 2nd.
I don’t work for LessWrong, but I do work on improving rationalist meetups around the world. Once the survey is closed, I plan to play around with the data and write up an analysis post like this one sometime in late January.
Remember, you can take the survey at this link.
Ancient tradition is that if you take the survey you can comment here saying you took the survey, and people upvote you for karma.
Could we have this question be phrased using no negations instead of two? Something like “What is the probability that there will be a global catastrophe that wipes out 90% or more of humanity before 2100?”
Argh, I hate tweaking historical questions. This seems equivalent, so let’s try it.
It wound up phrased that way in an attempt to make a minimal change from the historical version of the question, where the question and the title were at odds.
I filled out the survey. Thank you so much for running this!
You’re welcome! Thank you for taking it.
At least one of the rot13 questions has a title
P(X and Y)
that doesn’t match the X and Y described in the question.

Whelp, that’s a dumb error on my part. Fixed, and thank you.
My answers to the IQ questions could seem inconsistent: I’d pay a lot to get a higher IQ, but if it turned out LLM usage decreased my IQ, that would be a much smaller concern. The reason is that I expect AI Alignment work to be largely gated by high IQ, such that a higher IQ might allow me to contribute much more, while a lower IQ might just transform my contribution from negligible to very negligible.
Noted. I’m already expecting the marginal value of IQ to be weird, since IQ isn’t a linear scale in the first place.
I admit I’m testing a chain of conjectures with those questions and probably will only get weak evidence for my actual question. The feedback is really appreciated!