LessWrong 2016 Survey
It’s time for a new survey!
The details of the last survey can be found here, and the results can be found here.
I posted a few weeks back asking for suggestions for questions to include on the survey. As much as we’d like to include more of them, we all know what happens when we have too many questions. The following graph is from the last survey.
http://i.imgur.com/KFTn2Bt.png
(Source: JD’s analysis of 2014 survey data)
Two factors seem to predict if a question will get an answer:
- The position of the question.
- Whether people want to answer it. (Obviously.)
People answer fewer questions as the survey approaches its end. They also skip tricky questions. The least answered question on the last survey was "What is your favourite LW post? Provide a link", which I assume was mostly skipped because of the effort required either in picking a favourite or in finding a link to it. The second most skipped questions were the digit-ratio questions, which require more work (get out a ruler and measure) compared to the others. This is unsurprising.
This year’s survey is almost the same size as the last one (though just a wee bit smaller). Preliminary estimates suggest you should put aside 25 minutes to take the survey, however you can pause at any time and come back to the survey when you have more time. If you’re interested in helping process the survey data please speak up either in a comment or a PM.
We’re focusing this year particularly on getting a glimpse of the size and shape of the LessWrong diaspora. With that in mind: if possible, please make sure that your friends (who might be less connected but still hang around in associated circles) get a chance to see that the survey exists, and if you’re up to it, encourage them to fill out a copy of the survey.
The survey is hosted and managed by the team at FortForecast; you’ll be hearing more from them soon. The survey can be accessed through http://lesswrong.com/2016survey.
Survey responses are anonymous in that you’re not asked for your name. At the end we plan to do an opt-in public dump of the data. Before publication the row order will be scrambled; datestamps, IP addresses, and any other non-survey-question information will be stripped; and certain questions marked private, such as the (optional) sign-up for our mailing list, will not be included. It helps the most if you say yes, but we can understand if you don’t.
Thanks to Namespace (JD) and the FortForecast team, the Slack, the #lesswrong IRC on freenode, and everyone else who offered help in putting the survey together. Special thanks to Scott Alexander, whose 2014 survey was the foundation for this one.
When answering the survey, I ask that you keep your answers in a useful format. For example, if a question asks for a number, please reply with "4", not "four". Going by the last survey we may very well get thousands of responses, and cleaning them all by hand would cost a fortune on Mechanical Turk. (And that’s just the ones we can put on Mechanical Turk!) Thanks for your consideration.
The survey will be open until the 1st of May 2016.
Addendum from JD at FortForecast: During user testing we’ve encountered reports of an error some users get when they try to take the survey which erroneously reports that our database is down. We think we’ve finally stamped it out but this particular bug has proven resilient. If you get this error and still want to take the survey here are the steps to mitigate it:
- Refresh the survey; it will still be broken. You should see a screen with question titles but no questions.
- Press the "Exit and clear survey" button; this will reset your survey responses and allow you to try again fresh.
- Rinse and repeat until you manage to successfully answer the first two questions and move on. It usually doesn’t take more than one or two tries. We haven’t received reports of the bug occurring past this stage.
If you encounter this please mail jd@fortforecast.com with details. Screenshots would be appreciated but if you don’t have the time just copy and paste the error message you get into the email.
Meta: this took 2 hours to write and was reviewed by the Slack.
My Table of contents can be found here.
I am literally pregnant right now and wasn’t sure how to answer the ones about how many children I have or if I plan more. (I went with “one” and “uncertain” but could have justified “zero” and “yes”).
Congratulations!
My wife is also pregnant right now, and I strongly felt that I should include my unborn child in the count.
Elo, thanks a lot for doing this.
(for the record, Elo tried really hard to get me involved and I procrastinated helping and forgot about it. I 100% endorse this.)
My only suggestion is to create a margin of error on the calibration questions, eg “How big is the soccer ball, to within 10 cm?”. Otherwise people are guessing whether they got the exact centimeter right, which is pretty hard.
Since you are such a huge part of the diaspora community I would be delighted if you could share the survey to both your readers and your friends.
We will get that suggestion sorted asap.
I actually can’t do that. The way our survey engine works changing the question answers mid-survey would require taking it down for maintenance and hand-joining the current respondents to the new respondents. In general I planned to handle the “within 10 cm” thing during analysis. Try to fermi estimate the value and give your closest answer, then the probability you got it right. We can look at how close your confidence was to a sane range of values for the answer.
I.e., if you got it within ten and said you had a ten percent chance of getting it right, you’re well calibrated.
Note: I am not entirely sure this is sane, and would like feedback on better ways to do it.
EDIT: I should probably be very precise here. I cannot change the question answers in the software, presumably because it would involve changing the underlying table schema for the database. I can change the question/ question descriptions so if there’s a superior process for answering these I could describe it there.
But unless I’m misunderstanding you, the size of the unspoken “sane range” is the entire determinant of how you should calibrate yourself.
Suppose you ask me when Genghis Khan was born, and all I know is "sometime between 1100 and 1200, with certainty". Suppose I choose 1150. If you require the exact year, then I’m only right if it was exactly 1150, and since it could be any of 100 years my probability is 1%. If you require within five years, then I’m right if it was any time between 1145 and 1155, so my probability is 10%. If you require within fifty years, then my probability is effectively 100%. All of those are potential "sane ranges", but depending on which one you pick, the correctly calibrated estimate could be anywhere from 1% to 100%.
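The dependence on the unstated tolerance is easy to make concrete with a small sketch (assuming, as in the example, a uniform belief over the years 1100 to 1200 and a guess of 1150):

```python
def p_correct(guess, lo, hi, tolerance):
    """Probability that `guess` is within `tolerance` years of the
    true value, given a uniform belief over the integers lo..hi."""
    years = range(lo, hi + 1)
    hits = sum(1 for y in years if abs(y - guess) <= tolerance)
    return hits / len(years)

for tol in (0, 5, 50):
    print(tol, round(p_correct(1150, 1100, 1200, tol), 3))
# 0  -> ~1%   (exact year required)
# 5  -> ~11%  (within five years)
# 50 -> 100%  (within fifty years)
```

The same fixed guess yields a well-calibrated probability anywhere from about 1% to 100%, so the scoring only makes sense once the tolerance is fixed.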
Unless I am very confused, you might want to change the questions and hand-throw-out all the answers you received before now, since I don’t think they’re meaningful (except if interpreted as probability of being exactly right).
(Actually, it might be interesting to see how many people figure this out, in a train wreck sort of way.)
PS: I admit this is totally 100% my fault for not getting around to looking at it the five times you asked me to before this.
Yeah, you’re right.
Currently trying to figure out how to do that in the least intrusive way.
EDIT: Good news: it turns out that I can edit the calibration question ‘answers’ after all. The ones where a range would make sense have been edited to include one. Questions such as "which is heavier" have not been, because the ignorance prior should be fairly obvious.
Fri Mar 25 19:50:41 PDT 2016 | Answers submitted on or before this date, for questions where ranges have been added, will be controlled for at analysis time.
If you throw out the data, I request you keep the thrown-out data somewhere else so I can see how people responded to the issue.
I don’t throw out data. Ever. I only control for it. (Well barring exceptional circumstances.)
Even if he threw out the data I have recurring storage snapshots happening behind the scenes (on the backing store for the OSes involved.)
[Survey Taken Thread]
Let’s make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.
I have taken the survey.
I have taken the survey.
I have taken the survey. I did not treat the metaphysical probabilities as though I had a measure over them, because I don’t.
Similarly, I gave self-conscious nonsense numbers when asked for subjective probabilities for most things, because I really did not have an internal model with few-enough free parameters (and placement of causal arrows can be a free parameter!) to think of numerical probabilities.
So I may be right about a few of the calibration questions, but also inconsistently confident, since I basically put down low (under 33%) chances of being correct for all the nontrivial ones.
Also, I left everything about “Singularities” blank, because I don’t consider the term well-defined enough, even granting “intelligence explosions”, to actually talk about it coherently. I’d be a coin flip if you asked me.
So basically, sorry for being That Jerk who ruins the survey by favoring superbabies and restorative gerontology, disbelieving utterly in cryonics and the Singularity, and having completely randomized calibration results.
I have taken the survey.
I have taken the survey.
I have taken the survey. Yesterday.
I have taken the survey
I took the survey 2 days ago. It was fun. I think I was well calibrated for those calibration questions, but sadly there was no “results” section.
Is it possible to self-consistently believe you’re poorly calibrated? If you believe you’re overconfident, then you would start making less confident predictions, right?
Being poorly calibrated can also mean you’re inconsistent between being overconfident and underconfident.
You can be imperfectly synchronised across contexts & instances.
I have taken the survey. I like the new format.
I have taken the survey.
I’ve taken the survey.
Yet another survey be-takener here.
I have taken the survey.
I took the survey
I have taken the survey.
Survey: taken.
I have taken the survey.
I have taken the survey.
I have taken the survey.
The survey has been taken by me.
Survey achieved.
I have taken the survey.
It is done. (The survey. By me.)
I took the survey.
I have taken the survey. :)
I have taken the survey.
I have taken the survey. I left a lot of questions blank though, because I really have no opinion about many of them.
Survey taken.
Just finished. I’m sure my calibration was terrible though.
I have taken the survey.
Took the survey, had the recurring survey confusion about some questions. For instance, I think some taxes should be higher and others should be lower. Saying I have no strong opinion is inaccurate but at least it seemed like the least inaccurate answer.
I took it.
Me too.
RE: The survey: I have taken it.
I assume the salary question was meant to be filled in as gross, not net. However, that could result in some big differences depending on the country’s tax code...
Btw, I liked the professional format of the test itself. Looked very neat.
I took the survey!
I have taken the survey.
I have taken the survey.
I have taken the survey.
I did My Part!
I have taken the survey.
Took it!
It ended somewhat more quickly this time.
I have taken the survey.
I have taken the survey.
Survey Taken
I have taken the survey
Took survey. Didn’t answer all the questions because I suspend judgment on a lot of issues and there was no “I have no idea” option. Some questions did have an “I don’t have a strong opinion” option, but I felt a lot more of them should also have that option.
I have taken the survey.
I have taken the survey.
I have taken the survey.
I completed the survey. I also like the new format—easy to read, good instructions etc.
For a few moments I was paralyzed with uncertainty about how humorous to try to make my “I took the survey” response, since many seemed to have made a similar attempt, thus this post took longer to finish than the survey itself, which I have taken.
I have taken the survey.
I have taken the survey.
The only option I think was missing was in the final questions about amounts donated to charity: an option such as "I intend to donate more before the end of the financial year" or similar. (And while likely not feasible, following up with those people in the next survey to see if they actually donated would be interesting.)
Yar, have taken the scurvy survey, says I!
I have taken the survey.
I too have taken the survey.
I have taken the survey.
I took the survey.
I have taken the survey.
I have taken the survey.
((past-tense take) i survey)
You’ve got a slight lisp there ;)
I have taken the survey.
I took the survey!
I have taken the survey.
I have taken the survey
I have taken the survey.
I have taken the survey.
I think I spent about 1 hour and 20 minutes answering almost all of the questions. I’m probably just unusually slow. :P
Survey taken. By me, even.
Took the survey, and as others pointed out had some trouble with the questions about income (net? gross?) Also, is there any place where all the reading (fanfiction, books, blogs) hinted to in the survey are collected? I knew (and have read) some, but many I have never heard of, and would like to find out more.
Did it.
I have taken the survey
For the interests of identity obfuscation, I have rolled a random number between 1 and 100, and have waited for some time afterwards.
On a 1-49: I have taken the survey, and this post was made after a uniformly random period of up to 24 hours.
On a 50-98: I will take the survey after a uniformly random period of up to 72 hours.
On a 99-100: I have not actually taken the survey. Sorry about that, but this really has to be a possible outcome.
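The obfuscation scheme above can be written down as a tiny sketch (a toy rendering of the scheme, not anything official):

```python
import random

def obfuscated_announcement(rng=random):
    """Roll 1-100 and pick an announcement branch, per the scheme above."""
    roll = rng.randint(1, 100)
    if roll <= 49:    # 49%: took it; post delayed up to 24 hours
        return "took the survey; posted after a uniform delay up to 24 hours"
    elif roll <= 98:  # 49%: will take it within a uniform 0-72 hour window
        return "will take the survey within a uniform delay up to 72 hours"
    else:             # 2%: did not take it at all
        return "did not take the survey"
```

Note the 98% chance of actually having taken (or going on to take) the survey, which is what the "98% chance of an upvote" reply is matching.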
Have a 98% chance of an upvote.
I have taken the survey.
I’ve taken the survey.
Thanks Huluk for creating this subthread, very handy when reading others’ comments about the survey itself.
I have taken the survey.
I took the survey.
Was taking it, and it crashed with a “This webpage is not available” error.
We had some power outage related downtime for three hours or so, should be back up now.
I’m a little unclear on how to proceed. I didn’t establish a “save”, so I can’t really resume the survey. Does that mean I should start a new survey and pick up where I left off, or … ?
If you’d be willing to go through the trouble of doing it, yes that’s exactly what you should do. I didn’t think of that, thanks.
Though from a data-consistency perspective, people doing this would skew our response rate higher than it really is. I’d rather have the question data than an accurate response rate, though, so, shrug.
On the session timeout front, we’re trying something out to make the sessions longer, which should cut down on that particular problem significantly.
Survey taken.
Besides saying that I have taken the survey...
I would also like to mention that giving probabilities for unobservable concepts was the hardest part for me. Of course, there are some I believe in more than others, but still, any probability besides 0% or 100% seems really strange to me. For something like being in a simulation, believing it but having some doubts and saying 99%, or not believing but being open to it and saying 1%, both seem arbitrary and odd. 1% is really huge in the scope of very probable or very improbable concepts which cannot be tested yet (and some may never be).
… before losing my sanity trying to choose percentages I would still find plausible even a few minutes later, I had to fill them in based on my current gut feelings instead of Fermi-estimate-style calculations.
I have taken the survey
Me, too! I’ve taken the survey and would like to receive some free internet points.
I have taken the survey.
I’ve taken the survey.
I’ve taken the survey.
I’ve taken the survey.
I completed the survey. Elo, thanks for organising this!
I have taken the survey.
I enjoyed the “yes, I worry about X, but only because I worry about everything” responses.
I really liked things like “option for people who aren’t in the US and want an option to choose” plus I think I recall one like “I like clicking on options” :D
Survey has been taken.
Me! Me! I totally took the survey!
I’ve said it before and I’ll say it again—this is mild cult behavior.
… That being said, bring on the low cost gratification! I’ve taken the survey!
Fun traditions might be undignified by the standards of academia, but they’re perfectly normal in many other social contexts (small company, group house, etc.)
You know what else exemplifies “mild cult behavior”? Burning Man! They give each other physical gifts instead of imaginary internet gifts. Even more problematic.
If you are willing to define “cult” broadly enough, you can use the term to shut down any kind of cultural development. (Of course, cultural development that’s already happened will get grandfathered in, the same way we don’t call religions “cults” because they are too dignified and established.)
No, I don’t think it does. Burning Man is an event and a community. I don’t see any cultish tendencies around it.
I was being sarcastic. My point was that the “cult” label can be hard to shake whether or not it’s deserved—I analogized to Burning Man since it shares characteristics with LW, but was lucky enough to avoid getting labeled a “cult”.
You really think it was just luck that BM didn’t get a “cult” label and LW did..?
I did not mean to say that it was “just” luck, but of course luck played a role (as it always does).
Took the survey before joining.
I have taken the survey
I have taken the survey.
I have taken the survey! Please reward my compliance.
I have taken the survey. It was fun, thanks!
Lo, I have taken the survey.
Taken.
I have taken the survey.
I have taken the survey. It was interesting, thanx to those who made it!
Oh right, I forgot this part. I have taken the survey (like two weeks ago)
I have taken the survey.
Newbie, done.
I took the survey for the 2nd year in a row. Can’t wait to see the results.
META
Why are some people upvoted more or less than others? I predicted I would be far less upvoted than others because of my controversiality but I am one of the highest upvoted here. From memory, this can’t be explained by the recency of others posting that they have completed the survey. In light of what I see here, I will re-evaluate my entire post history. If people are biased towards me rather than away, that changes my entire posting strategy.
This has come up before. Then, it looked like gwern and I both got a boost from name recognition, but for everyone else it was just dependent on when they took the survey.
If I come by every day and upvote everyone, before I come that day a fraction of the people will have upvotes from me and another fraction won’t, determined by time. Now add a bunch of people doing similar things but at different schedules (or only upvoting everyone who took it before they did, and not anyone who took it after, because they don’t come back to this page).
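That drive-by upvoting pattern can be sanity-checked with a toy simulation (the numbers are made up): each upvoter arrives once, upvotes every comment already posted, and never comes back.

```python
def simulate_upvotes(post_times, upvoter_times):
    """Each upvoter, on arrival, upvotes every comment posted at or
    before that time, and never returns for later comments."""
    return [sum(1 for u in upvoter_times if u >= t) for t in post_times]

# Comments posted on days 1, 5, and 20; one upvoter arrives per day for 30 days.
votes = simulate_upvotes([1, 5, 20], range(1, 31))
print(votes)  # [30, 26, 11]: earlier comments collect more votes
```

Under this model, vote counts fall off with posting time regardless of content, which matches the observed pattern in the thread.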
Yup. Pretty sure the dominant thing is just that people who report having taken the survey earlier get more upvotes.
I see 20-30 comments in the thread so far (didn’t count); probably people don’t bother to upvote every single one, and instead vet whom they upvote here, I think.
Are you going to agree with everyone now, because it’s more controversial to do so?
Wait! Crap! I already replied. Good thing I caught this before anyone upvoted it.
I’d like to make a miniature announcement so there isn’t any confusion:
Most of the time when somebody writes in a suggestion for improving the questions I don’t reply to it, I just silently upvote the post and write down the question in a list of things to do for the next survey. But I am reading them, and I plan to go through and read them again before I wrap up the final survey analysis.
It’s probably too late to change this now, but I have a slight nitpick with some of the political questions.
Many of them use “No strong opinion” as the default between more and less. But I believe that leaves out those who have a strong opinion that the current level of, say, taxation is correct.
The question “How Long Since You Last Posted On LessWrong?” is ambiguous—I don’t know if posting includes comments or just top-level posts.
I assumed it was about comments, because only a handful of people would have posted a top-level post to LW ‘today.’
I think the ambiguity is kinda resolved by the fact that the previous question was about comments, and this one would be largely redundant if interpreted as also being about comments. Also, the timescales in the question make better sense in reference to actual posts.
I agree it would be better if a bit more explicit, though.
I assumed it was about posts, because of the wording—it said “posted” not “commented”!
Taken.
BTW, in the global warming question I took "significant" to mean "much larger than typical natural variability over the same timescales". My answer would have been higher if it meant "much larger than measurement uncertainties", lower if it meant "likely to have negative effects much larger than the cost of averting the warming would have been", and even lower if it meant "much larger than typical natural variability over any timescales".
Ick. I was annoyed with the Global Warming question. Without a timescale and an objective definition of “significant”, there’s no particular meaning to the question besides signaling team membership.
I left it blank because of the vagueness. I wonder if the vagueness will have a biased or unbiased effect on those who decline to respond.
Suggestion for next year’s survey:
Reword the question in the Probabilities section to “What is the probability that the average temperatures on Earth (averaged over a few decades) are rising at a much higher rate than typical of natural variability, largely as a result of human activities?”
Add a question to the Politics section reading “How would you describe your opinion on efforts to contain global warming?”
The contrast on the side-by-side options is way too low (clicking a dark blue text bubble turns it a slightly darker blue).
Surveiled!
Great survey!
However, when you save your progress and are asked to save a password, there’s no indication that it will be sent to you in an email or saved at all in recoverable form. I used my least-secure password generation algorithm anyway, but: Do you think you could add a note to the effect that users should not use passwords that they use elsewhere?
Looking into it now.
EDIT: Added this warning to the save form:
“We store the password and send it to you by email, so please do not use a ‘trusted’ password for this that you use for anything important.” (Not our design decision by the way.)
I have taken the survey.
Comment: “90% of humanity” seems a little high for “minimum viable existential risk”. I’d think that 75% or so would likely be enough to stop us from getting back out of the hole (though the nature of the destruction could make a major difference here).
What makes you think so? The main reason I can see why the death of less than 100% of the population would stop us from getting back is if it’s followed by a natural event that finishes off the rest. However 25% of current humanity seems much more than enough to survive all natural disasters that are likely to happen in the following 10,000 years. The black death killed about half the population of Europe and it wasn’t enough even to destroy the pre-existing social institutions.
We have a lot more infrastructure than Europe had at the time of the Black Death. If we lost 75% of the population, it might devastate things like the power grid, water supply and purification, etc.
We have (I think) more complicatedly interdependent institutions than Europe at the time of the Black Death. Relatively small upheavals in, e.g., our financial systems can cause a lot of chaos, as shown by our occasional financial crises. If 75% of the population died, how robust would those systems be?
The following feels like at least a semi-plausible story. Some natural or unnatural disaster wipes out 75% of the population. This leads to widescale failure of infrastructure, finance, and companies. In particular, we lose a lot of chip factories and oil wells. And then we no longer have the equipment we need to make new ones that work as well as the old ones did, and we run out of sufficiently-accessible oil and cannot make fast enough technological progress to replace it with solar or nuclear energy on a large scale, nor to find other ways of making plastics. And then we can no longer make the energy or the hardware to keep our civilization running, and handling that the best we can takes up all our (human and other) resources, and even if in principle there are scientific or technological breakthroughs that would solve that problem we no longer have the bandwidth to make them.
The human race would survive, of course. But the modern highly technology-dependent world would be pretty much screwed.
(I am not claiming that the loss of 75% of the population would definitely do that. But it seems like it sure might.)
It doesn’t feel plausible to me. You don’t need computer chips or oil to have industry and science. Industry + science would eventually progress back to modern capabilities, but probably faster due to people rediscovering old knowledge preserved here and there.
For how long?
Indefinitely, in the scenario I described—we’d have lost the technology necessary to rebuild the technology. (E.g., if abundant energy depends on one or more of { getting lots of oil, getting lots of uranium, making really good solar cells, figuring out fusion } and making any of those happen depends in turn on abundant energy.)
We built it from scratch to start with.
I think you’re confusing technology and scale. Besides, can we now finally admit peak oil was wrong?
Unfortunately, we can’t. While we’re not going to run out of oil soon (in fact, we should stop burning it for climate reasons long before we do; also, peak oil is not about oil depletion), we are running out of cheap oil. The EROEI of oil has fallen significantly since we started extracting it on a large scale.
This is highly relevant for what is discussed here. In the early 20th century, we could produce around 100 units of energy from oil for every unit of energy we used to extract it; those rebuilding the civilization from scratch today or in the future would have to make do with far less.
I am sure we can. Peak oil said we’d run out of oil Real Soon Now, full stop. The cost of oil has been rising since the early XX century, as you point out; that’s not what peak oil was all about.
Again, we have confusion of technology and scale. The average cost of oil extraction is higher than it used to be. But that cost varies, considerably. If you are trying to rebuild you don’t need much oil, so you only use the cheapest oilfields (e.g. the Saudi ones) and don’t try to pave over the North Sea with oil rigs or set them up all over the Arctic.
If you go to the Wikipedia page about Peak Oil one of the first things you see will be a graph, derived from Hubbert’s 1956 paper. It shows oil production continuing to (and, looking at the graph, presumably past) year 2200. Hubbert’s paper doesn’t actually say anything much about when supply will fail to meet demand—it makes no attempt to model demand. (It does say something like “This doesn’t mean we’re going to run out of liquid and gaseous fuels real soon now, because we can make them from other more abundant fossil fuels”, presumably meaning coal.)
I’m not sure what it means to say that “peak oil was wrong”. I mean, the amount of oil on earth is in fact finite. At some point we will either run out or stop using it for other reasons; at some point before then there will be a global maximum of production (if it hasn’t occurred already). Some specific guess about when those things would happen could well have been wrong, but that doesn’t invalidate the overall picture and I’m not aware of any reason to think it even changes the timescales all that drastically.
The arguments about peak oil mostly consist of running to and fro between the motte (“the amount of oil on earth is in fact finite”) and the bailey. It’s tiring and not very useful.
Peak oil has been promising permanent—and accelerating—reductions in absolute oil production, sky-high—and climbing—prices and widespread—and worsening—scarcity leading to a variety of unpleasant social consequences since the mid-1970s. That’s 40 years of being wrong.
Well, what happened in this actual case is that I said it might turn out that rebuilding technological society after a huge catastrophe might be dependent on cheaper oil than we’d actually have, and it was to that that you replied “can we now finally admit peak oil was wrong?”.
What version of “peak oil was wrong” refutes what I said?
That wasn’t an argument against your position per se. It was more of a side lunge. Or a distraction or a pirouette or a slip-and-fall or a bête noire or a whimsy or a wibble—you pick :-)
Another possibility is that it will become possible (and cheap enough) to produce oil from other things, before it runs out. In that case it would seem reasonable to say that the peak oil theory was wrong.
It is possible to produce oil from coal. It’s not a new process, Germany used it widely during WW2 as it had little access to “regular” oil.
And, as I remarked above, when Hubbert wrote his original paper about “peak oil” (at least, I think the thing I saw was his original paper), he explicitly said that coal can be used to make oil and gas, and that therefore diminishing oil extraction doesn’t have to mean no more oil.
Peak oil refers to the moment when the production of oil has reached a maximum and after which it declines. It doesn’t say that we’ll run out of it soon, just that production will slow down. If consumption increases at the same time, it’ll lead to scarcity.
Well, that probably depends on how much damage has been done. If civilization literally had to be rebuilt from scratch, I’d wager that a very significant portion of that cheap oil would have to be used.
Oh, yes it does.
Yup, we did. But after this hypothetical partial collapse our situation won’t be the same as when we started building our technological society. In some ways it’ll be better, but in others it’ll be worse; in particular, scarce natural resources will be harder to find because we already got out all the easy stuff.
I don’t think I am. I’m saying that technological advance is much easier in a society with good infrastructure, and that that infrastructure may depend on having lots of reasonably cheap energy, and that in this hypothetical scenario we may not have. (And that getting it back might depend on those technological advances we aren’t in a position to make until we’ve got it back.)
Scale.
Is your society 7 billion or 10 million? 10 million people can rebuild much of high-tech civilization, and they won’t need a lot of oil to do that. And then, of course, you go into a positive feedback cycle.
How sure are you of that? Here is a contrary opinion.
He answers a different question:
In the collapse-and-rebuild scenario you don’t need to “maintain the current level” right away. For example, you don’t need to be able to immediately build contemporary computer-controlled cars. The fully mechanical cars of the 20th century would do fine, for a while. All you need to do is have enough technology to not get stuck in a local minimum and get the positive feedback loop going. That’s a much easier task.
Of course by the time you’re done with the rebuild, your 10m people will multiply :-)
I’m really not sure it is.
The Black Death destroyed the social institution of serfdom. (Most people see that as a good thing.)
I don’t think it is that easy to judge. The universities continued to exist in name, but it looks to me like they were destroyed: they switched from studying useful philosophy to the scholasticism that is usually attributed to an earlier period. The Black Death produced a 200-year dark age (“the Renaissance”). But the books survived, including the recent books of the Oxford Calculators, and people were able to build on them when they rebuilt the social institutions.
Yes, maybe. But we have to draw a baseline somewhere.
As before, I found the question on metaethics (31) to be a tossup because I agree with several of the options given. I’d be interested in hearing from people who agree with some but not all of these answers:
I’m a subjectivist: I understand that when someone says “murder is wrong”, she’s expressing a personal judgement—others can judge differently. But I also know that most people are moral realists, so they wrongly think they are describing features of the world that don’t in fact exist; thus, I believe in error theory. And what does it mean to proclaim that something “is wrong”, other than to boo it, i.e. to call for people not to do it and to shun those who do? Thus, I also agree with non-cognitivism.
I don’t agree with any of these options, but I proposed the question back in 2014, so I hope I can shed some light. The difference between non-cognitivism and error theory is that the error theory supposes that people attempt to describe some feature of the world when they make moral statements, and that feature doesn’t exist, while non-cognitivism holds that moral statements only express emotional attitudes (“Yay for X!”) or commands (“Don’t X!”), which can neither be true nor false. The difference between error theory and subjectivism is that subjectivists believe that some moral statements are true, but that they are made true by something mind-dependent (but what counts as mind-dependent turns out to be quite complicated).
The intended difference is something like —
“I disapprove of murder.” This is a proposition that can be true or false. (Perhaps I actually approve of murder, in which case it is false.)
“Boo, murder!” This is not a proposition. It is an act of disapproval. If I say this, I am not claiming that I disapprove — I am disapproving.
It’s like the difference between asserting, “I appreciate that musical performance,” and actually giving a standing ovation. (It’s true that people sometimes state propositions to express approval or disapproval, but we use non-propositional expressions as well.)
I don’t understand how this difference leads to different (and disjoint / disagreeing) philosophical positions on what it means for people to say that “murder is wrong”.
If someone says they disapprove of murder, they could be wrong or lying, or they could actually disapprove a little but say they disapprove lots, or vice versa. And if they actually boo murder, that’s a signal they really disapprove of it, enough to invest energy in booing. But aside from signalling and credibility and how much they care about it, isn’t their claimed position the same?
Are you saying non-cognitivists claim people who say “murder is wrong” never actually engage in false signalling, and we should take all statements of “murder is wrong” to be equivalent to actual booing? That sounds trivially false; surely that’s not the intent of non-cognitivism.
If moral claims are not propositions, then propositional logic doesn’t work on them — notably, this means that a moral claim could never be the conclusion of a logical proof.
Which would stop us from deriving new moral claims from existing ones. I understand now. Thanks!
So, if I understand correctly now, non-cognitivists say that human morals aren’t constrained by the rules of logic. People don’t care much about contradictions between their moral beliefs, they don’t try to reduce them to consistent and independent axioms, they don’t try to find new rules implied by old ones. They just cheer and boo certain things.
It’s worth noting that there are non-cognitivist positions other than emotivism (the “boo, murder!” position). For instance, there’s the prescriptivist position — that moral claims are imperative sentences or commands. This is also non-cognitivist, because commands are not propositions and don’t have truth-values. But it’s not emotivist, since we can do a kind of logic on commands, even though it’s not the same as the logic on propositions.
https://en.wikipedia.org/wiki/Non-cognitivism
https://en.wikipedia.org/wiki/Imperative_logic
(“Boo, murder!” does not logically entail “Boo, murdering John!” … but the command “Don’t murder people!” conjoined with the proposition “John is a person.” does seem to logically entail the command “Don’t murder John!” So conjunction of commands and propositions works. But disjunction on commands doesn’t work.)
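That entailment pattern can be sketched in code. Below is a toy formalization of my own (the universe, the extra individual “Rex”, and all names are invented for illustration): a command is modeled by its satisfaction condition, the set of outcomes in which it is obeyed, while a proposition gets a plain truth-value; one command entails another, given the facts, iff every outcome obeying the first also obeys the second.

```python
import itertools

# Toy model: a command has no truth-value, only a satisfaction
# condition -- it is obeyed or violated by a given set of victims.

people = {"John", "Mary"}            # proposition: "John is a person." is true
universe = {"John", "Mary", "Rex"}   # Rex is a dog (hypothetical example)

def dont_murder_people(victims):
    """Command 'Don't murder people!': obeyed iff no person is murdered."""
    return not (victims & people)

def dont_murder_john(victims):
    """Command 'Don't murder John!': obeyed iff John is not murdered."""
    return "John" not in victims

def all_outcomes(universe):
    """Every possible set of victims."""
    for r in range(len(universe) + 1):
        for combo in itertools.combinations(sorted(universe), r):
            yield set(combo)

# Command A (plus the facts) entails command B iff every outcome
# obeying A also obeys B.
entails = all(dont_murder_john(v)
              for v in all_outcomes(universe)
              if dont_murder_people(v))
print(entails)  # True: the general command plus the fact yields the specific one
```

Note that nothing analogous works for the pure emotivist reading: “Boo, murder!” has no satisfaction condition to quantify over, so there is no entailment relation to compute.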
I was similarly torn between answers and I’m glad you brought this up. I think substantive realism is the most useful perspective here, but I clicked constructivism in an attempt to honor the spirit of the question, even if it was kind of a technicality.
For me, the hard-to-express part is that the universe cares nothing about human ethics, but it’s fine for us (humans) to view our shared utility function as objective.
I treat a moral sense similar to how I’d treat a “yummy” sense. Your nervous system does an evaluation. Sometimes it evaluates as yummy, sometimes as moral.
But the moral sense operates with a different domain and range than yummy, in that it has preferences between behaviors, preferences between preferences about behaviors, and so on, and it implies reward and punishment up the levels of abstraction in that scale of preferences.
I opted for Subjectivism as the best match.
Error Theory just seems rather dumb. I think I get the sense in which you mean it, which seems like a valid observation about the error of objectivists, but I think you’re misreading the definition here. It said “moral rightness and wrongness aren’t features that exist”, but they do, regardless of any confusion that moral objectivists may have about them. They exist to you, right?
Non-cognitivism seems like a straw man moral subjectivism. There is a lot more to it than just “boo”. There is structure to the behavioral preferences and the resulting behavioral responses.
You are not the first to draw this parallel.
[EDITED to add:] Really fun paper, by the way.
I had a similar issue: None of the options seems right to me. Subjectivism seems to imply that one person’s judgment is no better than another’s (which is false), but constructivism seems to imply that ethics are purely a matter of convenience (also false). I voted the latter in the end, but am curious how others see this.
Subjectivism implies that morals are two-place concepts, just like preferences. Murder isn’t moral or immoral, it can only be Sophronius!moral or Sophronius!immoral. This means Sophronius is probably best equipped to judge what is Sophronius!moral, so other people’s judgements clearly aren’t as good in that sense. But if you and I disagree about what’s moral, we may be just confused about words because you’re thinking of Sophronius!moral and I’m thinking of DanArmak!moral and these are similar but different things.
Is that what you meant?
Everything you say is correct, except that I’m not sure Subjectivism is the right term to describe the meta-ethical philosophy Eliezer lays out. The Wikipedia definition, which is the one I’ve always heard used, says that subjectivism holds that morality is merely subjective opinion, while realism states the opposite. If I take that literally, then moral realism would be the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).
All this is disregarding the empirical question of to what extent our preferences actually overlap, and to what extent we value each other’s utility functions in themselves. If the overlap/altruism is large enough, we could still end up with de facto objective morality, depending. Has Eliezer ever tried answering this? Would be interesting.
That makes no sense to me. How is it different from saying nothing at all is subjective? This seems to just ignore the definition of “subjective”, which is “an attribute of a person, such that you don’t know that attribute’s value without knowing who the person is”. Or, more simply, a “subjective X” is a function from a person to X.
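As a toy illustration of that definition (a sketch of my own, reusing the thread’s example names plus a hypothetical judge “Clippy”), a subjective “wrong” is a function from a judge to a verdict, not a bare property of the act:

```python
from typing import Callable

# A subjective attribute: a function from a person to a value.
SubjectiveVerdict = Callable[[str], bool]

def murder_is_wrong(judge: str) -> bool:
    """Two-place 'wrong': there is only judge!wrong, never bare wrong."""
    verdicts = {"Sophronius": True, "DanArmak": True, "Clippy": False}
    return verdicts[judge]

# Asking for the one-place fact of the matter is a type error in this
# model: murder_is_wrong() with no judge has no value at all.
print(murder_is_wrong("Sophronius"))  # True
print(murder_is_wrong("Clippy"))      # False
```

On this model, two people “disagreeing about morality” may just be evaluating two different functions at two different points, which is the confusion-about-words scenario described above.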
I believe that’s where the whole CEV story comes into play. That is, Eliezer believes or believed that while today the shared preferences of all humans form a tiny, mostly useless set (we can’t even agree on which of us should be killed!), something useful and coherent could be “extrapolated” from them. However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated, or why all humans could agree on an extrapolation procedure, and I don’t believe it myself.
I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. “just a matter of opinion”. I think (though could be mistaken) that the test described subjectivism as it just being a matter of opinion, which I would not agree with: Morality depends on individual preferences, but only in the sense that healthcare depends on an individual’s health. It does not preclude a science of morality.
Unfortunate, but understandable as that’s a lot harder to prove than the philosophical argument.
I can definitely imagine that we find out that humans terminally value other’s utility functions such that U(Sophronius) = X(U(DanArmak) + …, and U(danArmak) = U(otherguy) + … , and so everyone values everybody else’s utility in a roundabout way which could yield something like a human utility function. But I don’t know if it’s actually true in practice.
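Here is a minimal numerical sketch of that roundabout structure (all the agents’ numbers and the care-weight are invented): if each agent’s total utility is their selfish utility plus a discounted sum of the others’ totals, the mutual references resolve to a unique fixed point whenever the weights are small enough, and simple iteration finds it.

```python
# U_a = selfish_a + care * (sum of the other agents' total utilities).
# With care small enough, this map is a contraction, so iterating it
# from any starting point converges to the unique fixed point.
selfish = {"Sophronius": 10.0, "DanArmak": 8.0, "OtherGuy": 6.0}
care = 0.3  # invented weight each agent puts on each other agent

agents = list(selfish)
U = dict(selfish)  # initial guess: purely selfish utilities
for _ in range(200):
    U = {a: selfish[a] + care * sum(U[b] for b in agents if b != a)
         for a in agents}

# Closed-form check: summing U_a = selfish_a + care*(S - U_a) over all
# agents gives S = 24 + 0.6*S, so total utility S = 60, and each
# U_a = (selfish_a + care*S) / (1 + care); DanArmak's is 26/1.3 = 20.
for a in agents:
    print(a, round(U[a], 3))
```

The point of the sketch is just that “everyone values everybody else’s utility in a roundabout way” is mathematically coherent; whether real human preferences have this shape is the empirical question.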
I don’t think these two are really different. An “opinion”, a “belief”, and a “preference” are fundamentally similar; the word used indicates how attached the person is to that state, and how malleable it appears to be. There exist different underlying mechanisms, but these words don’t clearly differentiate between them, they don’t cut reality at its joints.
How is that different from beliefs or normative statements about the world, which depend on what opinions an individual holds? “Holding an opinion” seems to cash out in either believing something, or having a preference for something, or advocating some action, or making a statement of group allegiance (“my sports team is the best, but that’s just my opinion”).
Maybe you use the phrase “just an opinion” to signal something people don’t actually care about, or don’t really believe in, just say but never act on, change far too easily, etc.. That’s true of a lot of opinions that people hold. But it’s also true of a lot of morals.
You can always make a science of other people’s subjective attributes. You can make a science of people’s “just an” opinions, and it’s been done—about as well as making a science of morality.
I’m still not certain if I managed to get what I think is the issue across. To clarify, here’s an example of the failure mode I often encounter:
Philosopher: Morality is subjective, because it depends on individual preferences.
Sophronius: Sure, but it’s objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.
Philosopher: But that does not get us a universal system of morality, because preferences still differ.
Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that’s wrong, right?
Philosopher: No, we cannot criticize other cultures, because morality is subjective.
The mistake that the Philosopher makes here is conflating two different uses of subjectivity: He is switching between there being no universal system of morality in practice (“morality is subjective”) and it not being possible to make moral claims in principle (“Morality is subjective”). We agree that Morality is subjective in the sense that moral preferences differ, but that should not preclude you from making object-level moral judgements (which are objectively true or false).
I think it’s actually very similar to the error people make when it comes to discussing “free will”. Someone argues that there is no (magical non-deterministic) free will, and then concludes from that that we can’t punish criminals because they have no free will (in the sense of their preferences affecting their actions).
I understand now what you’re referring to. I believe this is formally called normative moral relativism, which holds that:
That is a minority opinion, though, and all of (non-normative) moral relativism shouldn’t be implicated.
Here’s what I would reply in the place of your philosopher:
Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that’s wrong, right?
Philosopher: It’s considered wrong by many people, like the two of us. And it’s considered right by some other people (or they wouldn’t regularly do it in some countries). So while we should act to stop it, it’s incorrect to call it simply wrong (because nothing is). But because most people don’t make such precise distinctions of speech, they might misunderstand us to mean that “it’s not really wrong”, a political/social disagreement; and since we don’t want that, we should probably use other, more technical terms instead of abusing the bare word “wrong”.
Recognizing that there is no objective moral good is instrumentally important. It’s akin to internalizing the orthogonality thesis (and rejecting, as I do, the premise of CEV). It’s good to remember that people, in general, don’t share most of your values and morals, and that a big reason much of the world does share them is because they were imposed on it by forceful colonialism. Which does not imply we should abandon these values ourselves.
Here’s my attempt to steelman the normative moral relativist position:
We should recognize our values genuinely differ from those of many other people. From a historical (and potential future) perspective, our values—like all values—are in a minority. All of our own greatest moral values—equality, liberty, fraternity—come with a historical story of overthrowing different past values, of which we are proud. Our “western” values today are widespread across the world in large degree because they were spread by force.
When we find ourselves in conflict with others—e.g. because they throw acid in their wives’ faces—we should be appropriately humble and cautious. Because we are also in conflict with our own past and our own future. Because we are unwilling to freeze our own society’s values for eternity and stop all future change (“progress”), but neither can we predict what would constitute progress, or else we would hold those better values already. And because we didn’t choose our existing values, they are often in mutual conflict, and they suffer evolutionary memetic pressure that we may not endorse on the meta level (i.e. a value that says it should be spread by the sword might be more memetically successful than the pacifistic version of the same value).
I don’t understand the apparent assumptions behind the questions about genetic modification of children. Presumably they were chosen to represent different moral / legal / social / mental categories of modifications, but the categories don’t feel entirely natural to me.
Why is “reducing the risk of schizophrenia” grouped with “improvements” rather than “preventing heritable diseases”? What is different about schizophrenia from all other heritable diseases? I don’t know to what degree it’s in fact heritable, but since we’re talking about genetic modifications, only the heritable component would be addressed anyway.
And why are “improvement purposes” implicitly defined as disjoint from “cosmetic reasons”? What makes intelligence a legitimate improvement but height merely “cosmetic”? Is everything visible (i.e. cosmetic) therefore not in the improvement category? I feel confused and might be missing the intent of this division.
Cosmetic feels like a distinct category from intelligence enhancements. One affects the actual personality and mind of the child, and the other is just their body. You can be ok with one and not the other.
I find it hard to believe that significant changes to e.g. height, weight, or muscle tone, present since birth, wouldn’t affect the personality and mind of a person. There’s a big difference between growing up short and tall, and between being weak and athletic. And there’s a really big difference between growing up ugly and beautiful.
Well FWIW I voted differently for intelligence enhancements than cosmetic enhancements on the survey. I’m probably not the only one, so separating them makes sense.
Can you explain your reasoning, please?
I’m not Houshalter, but: beauty is mostly a positional good (if everyone in the world were one notch less attractive, nothing would be terribly different) whereas intelligence is not (if everyone in the world were one notch less intelligent, it would almost certainly be really bad for the world’s economic and technological progress).
[EDITED to add:] … And therefore if you use a “what if everyone did it” criterion for distinguishing good actions from bad, intelligence enhancement looks distinctly better than attractiveness enhancement.
This argument works in the short term but I’m not sure if it works in the long term.
There’s probably a limit or at least diminishing returns to beauty, because there are limits to how symmetrical a face is, how large eyes are, how shiny hair is, how tall a person grows, and what is achievable via genetic engineering.
If everyone in the next generation is genetically engineered for beauty, the amount of variation should decrease. That would be good, in part because today we suffer from beauty superstimuli from seeing media of the most beautiful people in the world. (Past generations don’t matter because older people can’t compete on beauty anyway.)
Also, “what if everyone did it” doesn’t work in the real world; you have to consider defecting strategies. And a single defector that enhances their beauty would be very successful. The only stable equilibrium is for everyone to enhance.
The problem is cost, including opportunity cost and tradeoffs inherent in genetic optimization for a certain purpose, all being invested towards a goal with diminishing returns. But I would at least support genetic enhancements of beauty that don’t come at the cost of other genetic modifications, merely at the cost of dollars.
I don’t know if this is an opinion I feel strongly enough about to argue about on the internet. That’s just how I answered on the spot when the survey asked.
Something about cosmetic enhancements feels just wrong and creepy, in a way that intelligence enhancements don’t. Higher intelligence is objectively good. Our society would benefit from an increase in IQ. Intelligence is what distinguishes us from animals and lets us do all the cool things we do.
But increasing attractiveness wouldn’t make society any better. If anything it would make it worse, by creating obvious visual distinction between the modded and unmodded, which can’t end well.
And it just feels creepy. It reminds me of anecdotes about the Nazis wanting to create a race of blond-haired, blue-eyed people, or the image that circulates occasionally of how Korean beauty stars all look identical.
Do you also think that society should be devoid of all forms of art? Or perhaps, technically-enhanced art? (leaving you with perhaps cave paintings and little else.) After all, these things do not materially improve society either.
If you make each house in a city more beautiful, no one gets an advantage, but you still get a more beautiful city.
I value diversity, so it would be a loss if all the modified people get similar, but I don’t think it’s going to happen any more than all the art becoming similar.
Taken it, but there were a couple of questions I thought lacked flexibility (well, more than a couple, but I don’t really care for political self-identification etc.)
Suppose I personally have an income too small to donate, but my husband found the money to, and did? What do I answer then?
I don’t understand what this question is asking. Can someone please clarify?
Question 74, What would you want from a successor [LW 2.0]? More / same / less: Intense Environment.
What is an “intense environment”? What sites have it?
Also, more/same/less than LW has today, or than LW had at its peak? (I answered those questions assuming the former.)
I suggest this be posted to Main. I go long stretches without checking discussion, and just happened to find the survey here, but I subscribe to the Main RSS feed.
Moved to main and promoted.
If you have to leave the computer in the middle of the survey, the software will punish you by throwing away your already completed answers. Really sucks after having completed about 100 of them. :(
What the hell was the purpose of checking whether someone was “inactive for too long”? So what, they were inactive, now they are active again, what’s the big deal? Sometimes real life intervenes.
(Problems with connections happen too; I have a crappy wi-fi connection that I often have to restart several times a day. But that wasn’t the case now. Also, why can’t the software deal with disabled cookies? Calling root@localhost and waiting for an explanation...)
EDIT: If you happen to find yourself in a similar situation, use the e-mail mentioned in the article. As long as you remember enough data to uniquely identify your half-written response, the situation can be fixed.
The software needs a way to track who is responding to which questions, because many of the questions relate to one another. It does that without requiring logins by using the ongoing HTTP session. If you leave the survey idle, the session will time out. You can suspend a survey session by creating a login, which will then be used for your answers.
The cookies thing is because it’s not a single server but load-balanced between multiple webservers (multi-active HA architecture). This survey isn’t necessarily the only thing these servers will ever be running.
(I didn’t write the software but I am providing the physical hosting it’s running on.)
Hi.
I have no idea why that happened and I’m really sorry. It’s definitely not supposed to. root@localhost isn’t a real email address; it’s just there to stymie bogus system ‘error’ messages we were receiving.
The real mailing address you want is jd@fortforecast.com. We’d love to talk to you.
Sent an e-mail, thanks.
I like the new format. Some notes:
In some cases I’d have preferred “other” options. Some I left out.
The “enter in field below” was a bit unclear because it only appears on clicking the option.
The questions on donating to charity only relate to donating money to charity. Some people who have sufficient free time but little disposable income donate time to charities instead. I have seen reports that donating time over money is more common amongst students and people of low income, who seem to be a smaller proportion of the LW diaspora, but it may be interesting to compare donated time vs money on future surveys.
In my experience donating one’s time is also seen as being extra keen on that cause, presumably because it requires more effort, and there are certain causes that consider time more valuable than funds (eg local environmental causes, where hiring sufficient people to remove invasive weeds from a local swamp is more expensive than holding a big weeding exercise on a Saturday afternoon).
This is a really good point. It’d make an especially interesting question set because it would give us some idea of how seriously LWers take the comparative advantage idea when it comes to charity, as measured by their actions.
I was on the slack review team, apparently. Will my data be thrown out or should I take it again?
If you have successfully pushed submit your data has been counted. There were some spelling errors that were fixed but the substance of the survey was not changed.
Could you elaborate on what you mean? If you’ve already taken the survey prior to this post your results were counted and you don’t need to take it again.
It’s conceivable that data collected before alterations were made to the survey would be invalidated, or considered a confound for answer/no-answer data, and thrown out. It’s also conceivable that many additional questions were added, in which case retaking the survey would be valuable.
But I guess I won’t then.
I mentioned this in previous years but I’ll bring it up again: I had to skip “odds of supernatural.”
Without examples, it seems like an easy “0,” because it really sounds to me like “odds of something false.”
It strictly includes God, however, and I would answer “odds God is supernatural” as also 0.
So it is unclear whether I should answer for odds of God (the rest is zero to me, so God + 0 = God), which might be 60-80%, or odds of supernatural given my understanding of God (Superman theist. There’s a provident entity, but trusting people who used 40 to mean “a whole bunch” with understanding infinity seems silly. It’s just a lot cooler than humans. And part of nature.), which is zero.
Again, skipped, and I may be the only Superman theist here so don’t change it for just one person, but seemed worth repeating since there’s a new person at the survey’s helm.
I think it was question 137 that assumed a blank response would indicate “infinitely far in the future”.
That’s bad design for interpreting the response: I ended up not having an opinion on the answer, but my lack of opinion gets interpreted as a particular opinion.
IIRC, that question was added to the survey later.
I don’t remember even seeing that.
Typo in question 42.
Is there a deadline?
Yes, all responses should be turned in by May 1st.
Is there a responses per IP limit? I just had my family over and had them all complete the survey on my computer (all semi-converts), but if I only get one submission I’ll take it over so I get the vote :)
No, that should be fine.
Thanks, and thanks for getting this all together!
I notice that the fact that I can’t see all the questions on one page makes me feel more averse towards taking this survey. It makes me feel like there’s a potentially infinite amount of content to be answered, lurking out of sight, whereas if it was all one page I’d always be clear on how many more questions there were left.
This format also makes it hard to answer questions out of order, skipping a hard one until I’m done with all the easy ones.
This is a trade-off that we make to get partially completed survey data. That said, the total number of questions was mentioned at the start (maybe it could have been highlighted more), and there is a progress bar at the top of each page. I agree that this is not ideal; does the trade-off make more sense now?
Not sure what you mean by that?
But thanks for mentioning the progress bar, I didn’t notice it at first. That helps somewhat.
We get partially completed data from every page submitted, even if the survey as a whole is not completed.
It took me a while to notice the progress bar.
Question 17 seems to lack an “other” category, or at least an “Academics (on the research side)” box.
...and Teaching (non-academics), too.
I just remembered that I still haven’t finished this. I saved my survey response partway through, but I don’t think I ever submitted it. Will it still be counted, and if not, could you give people with saved survey responses the opportunity to submit them?
I realize this is my fault, and understand if you don’t want to do anything extra to fix it.
Someone said elsewhere in this thread that if you stop in the middle of the survey, it does record the answers you put in before quitting.
Great! Thanks!
AI reading LessWrong—will we find out soon?
Question number 90: “Have you ever practiced not letting an AI out of the box?” Choose one of the following answers: “Yes with Eliezer’s Ruleset”, “Yes with Tuxedage’s Ruleset”, “Yes with a different ruleset”, “No but I’ve been the AI”, “No”.
Option “No but I’ve been the AI” is of particular interest to me. I’m not a native English speaker, and I don’t know how strongly “’ve been” implies that the state has changed as of now.
My guess is that the survey tries to find out if some of the following is true:
1) There was an AI that has already become a “natural” intelligence (maybe human, maybe extraterrestrial).
2) There was an AI that had access to a time machine and could read LessWrong in our present time.
Any other guesses?
P.S. I’ve searched the LessWrong site for “No but I’ve been the AI” and found nothing, so the issue has not been discussed yet; and I have not noticed humor in the survey’s questions, so I conclude that the interest of the question is genuine.
Nope, it’s referring to “AI-box” experiments where one (human) participant is role-playing the AI role. No actual AI participation is in any way implied here.
Thank you.
I would expect rationalists to be more careful with words; why not phrase it as “played the AI role” to be clear?
Considering
per the last survey, there is a significant probability that I was an AI yesterday.
Sorry about that:
No reason. It made sense to most of us; we’ll keep it in mind for the future.
Is there an easy way of printing one’s replies (or saving them permanently for offline use), other than either:
Printing out each separate page;
Waiting for all the answers to be published and extracting one’s own row (though that’s suboptimal since the questions will presumably be absent and also, one has to wait)?
In the old survey/census I could print (to pdf) the entire form in one go.
Thanks for organising the survey!
Oh I’m sorry about that. It’s actually an option in the software but I didn’t turn it on because I couldn’t imagine anybody would use it. ^^;
Fixing now.
EDIT: Should be an option now when you complete the survey, thanks!
Thanks! (Sorry for the late reply.)
I’m always confused by the “spiritual atheist” question, that is, the “spiritual” part. Can anyone who selected this option try to explain what they meant when they selected it?
Just define “spiritual” as something other than “supernatural”. Life contains aspects of a numinous or sacred quality, even if there is no Absolute, supernatural basis for that quality.
I have an affinity for some of the teachings of Buddhism and Christianity. If someone asks me at the bar, I’d say I’m “spiritual, but agnostic and ultimately not religious”...or something like that.
In my experience, definitions get tricky when dealing in the atheist/agnostic/ignostic space.
Atheist Buddhists can label themselves that way, but there are a variety of different people.
I did not select that option, but I know people that identify this way. The sorts of people that do vary considerably, from an atheist who believes in ghosts or spirits, to people that believe that we can have telepathic and/or empathic connections and can achieve this through eg meditation etc. People that believe in “magic as a form of willpower making things change in the real world” consider themselves spiritual, but atheist. etc etc.
I think it sometimes just means “I’m an atheist, but I feel a sense of awe when contemplating the Grand Canyon or Maxwell’s equations or the way some people sacrifice their lives for others, I don’t particularly enjoy being rude about religion, and I find Richard Dawkins a bit annoying”.
It wasn’t clear: is this survey intended for everyone?
I ask because so many of the opening questions only make sense from a US perspective. I realise I can just skip them but it was giving me the feeling I was taking part in something that wasn’t aimed at me.
I have no SAT score, for instance, and since it would have been taken something like 28 years ago, I couldn't possibly remember it now even if I did. Who has an IQ test? Is that normal?
The survey is intended for everyone in the LessWrong community and its diaspora. If you saw this post, then it's probably for you to take.
A lot of our community are in the US, and enough people have SAT scores and IQ test results that we can estimate the average IQ of our population. I personally have neither an SAT score nor a recent IQ test to quote. You can skip any question you like.
Do I read last year's answer graph correctly that 15% didn't answer the first control question correctly?
Is there a list of the blogs / novels proposed in the “Have you read any of these...” section?
I've never heard of most of them and I would like to explore them.
We will definitely release them; it's probably better to wait until the survey is over, though.
Thanks for putting this together, and I will share it through Intentional Insights channels.
With whom? My understanding is that this is intended to be a survey of people who either are or have been LW participants.
This is a diaspora survey, for the pan-rationalist community.
Hence “have been”. Maybe I’m misunderstanding, but usually what “X diaspora” means is “people who have been X but have now moved elsewhere”.
Or have ancestors from X.
I would probably consider a regular SSC commenter to be part of the LW diaspora even if they have never personally been a regular LW commenter. (Not so sure about InIn, as AFAICT Gleb Tsipursky hadn’t been a LW regular before founding it.)
Having more data is good regardless of the semantics.
Not if the “data” is noise.
Most of those who haven’t ever been on Less Wrong will provide data for that distinction. It isn’t noise.
If what you want to know is "what characteristics do LW participants and ex-participants have", then you want to survey LW participants and ex-participants, and responses from other people will not help to answer that question. And if their survey answers don't clearly distinguish the other people from the participants and ex-participants, they will make the answers less useful. I'm not sure how clearly the questions make it possible to distinguish, but given that people make mistakes and don't fill everything in, I suspect that in practice they distinguish much less than perfectly.
I don’t think that’s true. Having a control group quite often does help you to know more.
What transfuturist said :-)
When will the survey results be published?
They’re available here.
Any way to get a list of the questions asked here without me going through the survey again and adding bad data?
Was interested in some of the blogs / book related questions for sources.
Yes, they will be published when we close the survey at the end of this month. If you want them sooner than that, you can PM me.
Perfect, thanks. I suppose a couple of days' wait can't hurt ;).
I feel like some questions could use a way to explain the answer, or an "other" option. For example, my answer to the immigration question would be "no restrictions on immigration for educated and culturally compatible people, extreme restrictions for uneducated and culturally incompatible ones", but I ended up picking the "no opinion" option, as that was closest to an average of "no restrictions" and "strong restrictions".
There is no way to skip the singularity question, since skipping is taken to mean "very unlikely". Instead, one has to enter some year like "-1" or "0".
Based on the graph, ask 20 or fewer questions.
And please keep the survey access link higher up.
Rather, ask the most important questions first. If you don’t finish the survey, it still registers your answers to the first questions.