Stop Using LessWrong: A Practical Interpretation of the 2012 Survey Results
Link to those results: http://lesswrong.com/lw/fp5/2012_survey_results/
I’ve been lurking on this site for more than a year now, and it’s incredible that I have taken anything here seriously at all, let alone that thousands of others have. I have never received evidence that I am less likely to be overconfident than people in general, or that any other particular person on this site is.
Yet apparently 3.7% of survey respondents have actually signed up for cryonics, which is surely a greater share than in the world population as a whole, and the idea is taken especially seriously here: evidently 72.9% of respondents are at least considering signing up. I think the chance of cryonics working is trivial, for all practical purposes indistinguishable from zero, and the expected benefit is certainly not worth several hundred thousand dollars once future value is considered. Other people here apparently disagree. But if the rest of the world is undervaluing cryonics, why don’t those here with privileged information invest heavily in new for-profit cryonics organizations, or found such organizations themselves, or invest in the technology that will supposedly make the revival of cryonics patients possible? If the rest of the world really is underconfident about these ideas, such investments should have an enormous expected rate of return.
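To make the expected-value claim concrete, here is a back-of-the-envelope sketch. The revival probability, payoff, and cost below are purely illustrative assumptions of mine, not figures from the survey:

$$
\mathrm{EV} = p_{\text{revival}} \cdot V_{\text{revival}} - C,
\qquad \text{e.g.} \qquad
10^{-4} \cdot \$10{,}000{,}000 - \$200{,}000 = -\$199{,}000.
$$

On numbers anything like these the purchase is deeply negative in expectation, and the entire disagreement reduces to the size of $p_{\text{revival}}$.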
There is also a question asking about the relative likelihood of different existential risks, which seems to imply that any of these risks is especially worth considering. This is not really a fault of the survey itself; I have read significant discussion of these ideas on this site. In my judgment it reflects a grand level of overconfidence in the probability of any of them occurring. How many people responding to this survey have actually made significant personal preparations for survival, such as a fallout shelter stocked with food, which would be useful under most of the scenarios listed? I generously estimate that 5% have made any such preparations.
I also see mentioned in the survey, and have read on this site, material related to what are in my view meaningless counterfactuals. The questions on dust specks vs. torture and Newcomb’s Problem are so unlikely ever to be relevant in reality that I regard discussion of them as worthless.
My judgment of this site as of now is that far too much time is spent discussing subjects of such low expected value (usually because of their absurdly low probability of occurring) for using this site to be worthwhile. In fact I hypothesize that this discussion actually causes overconfidence about such things happening, and at a minimum I have seen insufficient evidence of the site’s value to continue using it.
Look, while nothing you’re saying here is particularly objectionable in my opinion (not that I agree; it’s just that the disagreement is not violent), I’ve just gone over your comment history, and the comments all run along the lines of “I don’t believe I’ve gained any benefit from reading this post”, “I don’t think there’s much worth in discussing this”, “I’m not very convinced by the arguments made in this post”, etc. It goes on like this for about half a year.
Which gets me thinking: okay, so you didn’t like LessWrong from the very beginning, but then why spend time showing this to everybody? It doesn’t make sense to make an account just to periodically express your dissatisfaction with the content. When I believe a website to be boring and useless, I prefer not to bother with it and click the red X instead. Do you do this for every other site you stumble upon and come to dislike? Because that would be quite a lot of time wasted on places that just aren’t worth it.
Presumably aceofspades thinks getting people to stop reading LW is a valuable use of his time (e.g. because it frees up the time of smart and talented people who could be doing more productive things) and/or that his leaving would be an unacceptable example of evaporative cooling.
That or, y’know, someone is wrong on the internet.
Bye!
Apparently they have spent a good six months leaving similar comments throughout the site.
There are probably several points on which I would broadly agree with you; however, your post would be much better without the condescending tone. And perhaps without all the non sequiturs:
Why? If people don’t believe that cryonics will work, you can’t sell it to them for a lot of money even if they are wrong. (Disclaimer: I haven’t signed up for cryonics.)
If you believed that there is going to be a nuclear war in 90 years, would you start buying the food and preparing the shelter just now?
The risks listed in the survey results were pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, economic/political collapse, pandemic (natural), nanotech, and asteroid. A few of them could be short-term catastrophes with a critical first few weeks that a shelter would let you survive, but not necessarily. If we are speaking about a disaster wiping out 90% of the global population or more, it’s a pretty good bet that a lot of people are thinking of an event which renders Earth unlivable, with or without a shelter.
People can prefer death to living in a post-apocalypse world. (Or prefer “normal” pre-apocalyptic life and then death to life spent in preparation for the apocalypse and survival.)
The question was “which disaster do you think is most likely...”. Therefore, if 23% answer bioengineered pandemics, it doesn’t imply that 23% of people actually consider bioengineered pandemics probable. It can merely mean that they are less improbable than the rest of the list.
That no more than 5% of LW readers are preparing a shelter (likely a correct guess) is an argument for what, exactly? It can be evidence that the general LW opinion is actually closer to yours than you seem to believe, or it can be evidence that people are procrastinating, but it certainly doesn’t imply “grand level of overconfidence in the probabilities of any of these [catastrophes] occurring”.
(Disclaimer: I don’t especially fear future global catastrophes and moreover don’t think that we can predict them significantly better than by random guess.)
Relevant to what? Those discussions were intended as illustrations of theoretical problems with common utilitarian and decision-theoretic intuitions. Learning that one’s intuitions have a bounded domain and don’t work well in extreme, unrealistic scenarios is perhaps not a life-changing achievement, but it is at least interesting. Perhaps not interesting to you, but “not interesting to you” and “worthless” are different things. (Disclaimer: I don’t think that having the correct answer to Newcomb’s Problem or dust specks is going to be practically important in and of itself.)
I think you’re relying on a whole slew of assumptions here which obviously do not hold.
What existential threat would a fallout shelter help with? There’s a long list of existential threats which one could be worried about, you know… For someone enamored of economic judgment, you don’t seem to be doing it very well.
Possibly, but criticizing people for not doing cryonics startups or digging shelters is not going to prove such assertions.
You’ve never caught yourself in the act of falling for a cognitive bias detailed on this site?
I’m curious about what other web sites satisfy similarly high expectations. No snark intended.
RE: Cryonics. That particular reverse Kool-Aid doesn’t come in my flavor yet, but I enjoy that a notable minority are willing to put their money where their mouth is. It gives discussions of futuristic edge cases a novel weight.
“Kool-Aid” is now a term that can be used to mean not committing suicide?
You have a point! Updated for correctness and humor.
(Have two more karma points on the grandparent. I respect updates!)
“The market can stay irrational longer than you can stay solvent.” (John Maynard Keynes)
Also, at what amount of money would you be indifferent between being put to death and receiving that amount (to dispose of as you please, for instance in your last will), and staying alive without receiving it?
5.8 million 2007 US dollars.
The proposed dollar value of a human life to the economy is not the same as the amount at which a person would be indifferent between dying (and disposing of that amount as they saw fit in their will) and living (without receiving it).
The value of a person’s life to the government and/or economy is more like the value of that person’s life to themself, minus all consumer surplus.
VSL isn’t a measure of value “to the economy”; it’s a measure of the value people place on risks to their own lives, relative to other consumption choices they could make. It maps onto things like people’s willingness to pay for safety features in cars, to trade wages for job risk, and so forth.
However, there is still a wedge between VSL and the death-for-money indifference point asked about above:
A person who would accept a 50% risk of death in exchange for a billion dollars (to spend on hedonism) in the event of survival could be unconcerned with the fates of her heirs or any other uses for an estate after death.
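For concreteness, the standard textbook computation behind such figures (the wage and risk numbers here are made up for illustration): if workers demand an extra \$580 per year to accept an additional 1-in-10,000 annual risk of death, the implied VSL is

$$
\mathrm{VSL} = \frac{\Delta\text{wage}}{\Delta\text{risk}} = \frac{\$580}{1/10{,}000} = \$5.8 \text{ million},
$$

which is roughly how agency estimates in the range quoted above are derived.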
Thanks for the correction.
I’m a bit surprised, though. Value-to-the-economy may not be a very good proxy for the value of a human life, but at least it’s a coherent one, whereas I would be fairly shocked if the amount people in general were willing to pay to mitigate risks to their lives turned out to be coherent on a money-per-unit-of-risk basis.
To take one of the metrics from the linked page
I’d be willing to bet good money that if you performed such a survey, and another in which you posited a certain number of deaths per year due to terrorism and asked how much tax money ought to go to fighting it, the extrapolated value respondents assign to mitigating terrorist risk would be inconsistent with their stated value of home safety.
Certainly people’s “revealed preferences” do not appear to indicate that they’re consistent according to such a metric.
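A minimal illustration of the kind of incoherence being bet on, with hypothetical survey numbers: suppose respondents will pay \$100 a year for a device that removes a 1-in-100,000 annual risk of death at home, yet also endorse \$50 billion a year of counterterrorism spending expected to prevent 100 deaths a year. The implied prices of a statistical life are then

$$
\frac{\$100}{10^{-5}} = \$10 \text{ million}
\qquad \text{vs.} \qquad
\frac{\$5 \times 10^{10}}{100} = \$500 \text{ million},
$$

a factor-of-fifty gap between two valuations of the same good.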
Newcomb’s Problem and Specks vs. Torture... yes... the theory behind them, used to program AI... not so much. Eliezer is generous enough to write in English for those who don’t speak C++/pseudocode/math.
This post was pretty useless, to me, by the way. I’ve heard these very same opinions before, on this very website (put much more politely, eloquently, and, in some cases, even better argued for), so this post doesn’t affect my beliefs. Downvoting.
Edit: By the way, I’m here mostly because I enjoy spending time here. I didn’t start reading this website to learn about existential risk OR cryonics OR Newcomb’s problem.
While it’s a fair point that this forum has a lot of noise and nonsense, the insights not easily available elsewhere more than justify spending some of my time here. It only takes a moment to ignore/downvote posts you don’t care for, anyway. YMMV.
It’s not just the material here, but the references to material elsewhere, that pay dividends.
I came here out of interest in the references to Jaynes and Korzybski. People who take those guys seriously are ahead of the curve, IMO. They also take Cialdini and Kahneman seriously. And Hugh Everett. And some of them, Mencius Moldbug.
All guys with valuable insights.
I don’t share the fascination with Newcomb problems either. So I don’t read those threads. End of problem. It seems like the OP is busy looking for what he doesn’t value instead of what he does. That’s too bad for him. This place is a nexus of a lot of good ideas.
Doom avoidance and death avoidance evidently go together.
Belief in big technological advances over the next century or so would bump up both the chance of some kind of technological catastrophe and of cryonic revival being possible. On the other hand, a higher estimate of (cryonics-disrupting) catastrophe should reduce the expected payoff of cryonics.
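One toy way to see the tension (the factoring and the independence assumption here are mine, purely for illustration):

$$
P(\text{revival}) \approx P(\text{revival tech arrives}) \times \bigl(1 - P(\text{cryonics-disrupting catastrophe})\bigr),
$$

so a worldview expecting rapid technological progress raises the first factor while possibly lowering the second, and the net effect on the expected payoff of cryonics can go either way.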
Yeah, because the rest of the internet only discusses really relevant stuff, such as whether Wikipedia articles should write “31 December” or “December 31”. (SCNR.)
… Yay for more irrelevance?
This is a horrible line of argument if you ever set out to improve a state of affairs. It’s not a reason for us to be happier about the situation; it’s not the right attitude towards improving ourselves; all it ever does is paint a more dismal picture of the world than we previously imagined.
My point is that “Doing X is useless, so I’ll stop doing X” is only a valid argument if whatever you’re going to do with your time instead of X is less useless than X, and the OP hasn’t demonstrated that this is the case, like, at all.
Huh. I don’t think anyone needs to demonstrate that LW isn’t the single best thing the internet (or life in general) has to offer; I thought we could all agree that obviously there are loads of better uses of one’s time than browsing LW. Given the wide range of useful things life offers, it’s always a good idea to stop doing something you believe to be useless. It’s not as though our only choice is between several ways of wasting time, some more wasteful than others.
Unless, of course, the matter at hand is choosing an optimal way to waste your time, given that you’re prone to wasting it anyway. And even in that case you can probably find other ways to spend your time, ones that feel like leisure, that will benefit you more in the long run.
Honestly, your reply looks to me like you’re going on the defensive and bringing up the lowest common denominator as an excuse not to improve.
(Disclaimer: I care less about the truth of the premises (whether or not LW is a waste of time) than about the soundness of the argument and the attitude used to address the matter.)
My comment was supposed to be snarky (hence the “SCNR”), though I didn’t intend the connotation that we needn’t try to improve LW. Anyway, I’m surprised that my clarification has been upvoted this much (+4 as of now).