Maybe I’m just being habitually contrarian here for no good reason, but for a supposedly “rationalist” community, people here seem far too willing to accept claims of LessWrong exceptionality on shockingly weak evidence. Group-serving bias is possibly the most basic of all human biases, and we cannot overcome even that?
Claiming that your group is the best in the world, or among the best, is something nearly every group in history has done, and all of them had some anecdotal “evidence” for it. Priors are very strongly against this claim, even after these anecdotes are taken into account.
Yet, in spite of these priors, the group you consider yourself a member of is somehow the true best group ever? Really? Where’s the hard evidence for this? I’m tempted to point to Eliezer outright making things up about the costs of cryonics multiple times, and ignoring corrections from me and others, in case the halo effect prevents you from seeing that he’s not really extraordinarily less wrong.
You made up this ‘true best group ever’ idea yourself. “Best at a highly specific activity that is the primary focus of this group” is an entirely different claim.
Eliezer doesn’t have all that much of a halo. People disagree with him and criticise him incessantly, sometimes deservedly, sometimes not. Most of the times I have seen Eliezer accused of benefiting from a halo effect have been when he disagrees with the accuser on a particular subject and the majority of others here happen to disagree too. Acknowledging that those who disagree with you may be doing so independently, based on their own intellectual backgrounds, is not nearly so psychologically rewarding as dismissing them as blind followers.
Citation needed on “making things up on costs of cryonics”. Please do. I pay those costs out of pocket, they can be verified with my insurance agent if need be, and I should very much like to know what on Earth you think you are talking about.
I assume Taw is referring to this.
Eliezer reported what he (Eliezer) actually currently pays per year for term life insurance ($180) and for his membership with the Cryonics Institute ($120). This is relevant for youngish people worried about the effect of cryonics on their near-term cash flow. Since he is buying term life insurance, when he renews it (probably after 20 years) he will have to pay higher premiums, or will need to have accumulated savings to cover the cost. The Cryonics Institute is also the cheapest service.
Taw said that this distracts from the total net present value of the stream of premium and membership costs, which has to be close to the net present value of just saving up to pay for the cryonics out of pocket (~$50,000 for CI, in a distribution centered decades into the future) plus membership fees. Someone thinking about the tradeoff between cryonics and bequeathing wealth to their kids or to charity would worry more about this number. Taw then says that Eliezer is “lying” for giving his current costs rather than this number.
However, that NPV is not the nominal amount of a payout decades into the future. A youngish person can get whole life insurance (where premiums do not increase with age). The 24-year-old User:AngryParsley pays $768 per year for a life insurance policy with a $200,000 payout. Over 50 years he will pay $38,400 in premiums, which will be invested by the insurance company (which expects to profit by winding up with more than $200,000 by the time of payout, on average).
There is an additional complicating factor when talking about cases decades into the future that doesn’t arise in Eliezer’s situation (a youngish person wanting protection for the next few decades, with the expectation of accumulating wealth over time): inflation in cryonics costs. But a policy such as AngryParsley’s leaves plenty of margin for that, as the sketch below illustrates.
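To make both the NPV point and the inflation margin concrete, here is a minimal back-of-the-envelope sketch in Python. The 4% discount rate and 50-year horizon are illustrative assumptions, not figures from any actual policy; the $768 premium, $200,000 payout, and ~$50,000 CI cost come from the comments above:

```python
import math

def npv_of_stream(annual_payment, years, rate):
    """Net present value of a level annual payment stream, paid at year end."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

def npv_of_lump_sum(amount, years, rate):
    """Net present value of a single payment received `years` from now."""
    return amount / (1 + rate) ** years

RATE = 0.04    # assumed discount rate (illustrative, not from the thread)
HORIZON = 50   # assumed years until payout, matching the 50-year figure above

premiums = npv_of_stream(768, HORIZON, RATE)       # AngryParsley's premium stream
payout = npv_of_lump_sum(200_000, HORIZON, RATE)   # nominal $200,000 payout

print(f"NPV of premiums: ${premiums:,.0f}")  # ~$16,500
print(f"NPV of payout:   ${payout:,.0f}")    # ~$28,100

# Margin for cryonics-cost inflation: the annual rate at which a ~$50,000
# CI price could grow for 50 years before exceeding the $200,000 payout.
breakeven = math.exp(math.log(200_000 / 50_000) / HORIZON) - 1
print(f"Break-even cost inflation: {breakeven:.1%}")  # ~2.8% per year
```

On these assumptions, the discounted cost of the premium stream stays well below the discounted payout, and cryonics prices could roughly quadruple over the period before the policy stopped covering them.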
What? Why expect this inflation in cryonics costs to happen? Wouldn’t cryonics groups plan for it? They do explicitly say how much money must be set aside via insurance for people to join. While that could change, why expect them to renege on their promises (contracts? I’m not too familiar) to preserve people for the previously set amount of money?
I wouldn’t trust a business that didn’t plan for changes in the cost of its raw material commodities to so much as make ice for a lemonade stand, much less freeze people. A claim like yours calls for some clarification.
When I talked to Alcor, they said that they had raised the cost to join for new members several times, but had never increased the costs for existing members. They also said not to take that as a guarantee that they would never raise the costs for existing members, because they wouldn’t guarantee that.
Please do.
There is a big difference between something being ‘the best group ever’ and being ‘an easier shortcut to rationality than digging through philosophical writings the old-fashioned way’, which is how I interpreted this post. There is a community component to LessWrong that obviously isn’t present in old books, but I don’t think that’s paramount for most people. For me, in the beginning, the Sequences were just a good way to read about interesting ideas in small, digestible chunks during my breaks at work. Now it’s a bit more than that; LessWrong gives me a chance to post my ideas where they’ll be criticized by people who don’t have any social-etiquette reason not to tear apart my arguments. But there’s a big difference between a group being the optimum, the best any group of its kind could be, which LessWrong obviously isn’t, and a group being the best out of all the options in a limited area, which is more what this post is claiming (I think).
Reading books never was a good way to learn rationality. You need to learn it in practice, through discussion and debate, and you can do that in the context of mainstream philosophy, because mainstream philosophy has its blogs and newsgroups too. (Of course, it doesn’t have a “community” with a leader, a set of canonical works, and a number of not-very-provable doctrines everyone is supposed to subscribe to; and it’s better for it.)
I strongly agree.
To reply honestly to this, I think that LW is (close to) superlative in some dimensions. It’s just that when people try to tell the community that there’s a bunch of other more important dimensions that it sucks at, people get angry and shoot the messenger.
I agree. I haven’t found another online forum which I prefer. I agreed with taw because (a) I think that some people here do attach a halo to LW, viewing it as “The Way” in some generalized sense, and (b) people forget that a fair portion of what appears on LW is well known within certain circles (cf. Don’t Revere The Bearer Of Good Info).
I’ve noticed examples of this sort of thing.
What kind of dimensions did you have in mind here?
There is, however, room for disagreement on just how much “more important” these “other dimensions” are.
(Not necessarily taking a position myself, mind you.)
I strongly agree, and lay the blame in part on Eliezer’s innate bombast, and in part on karma.
While karma is probably, on balance, a desirable mechanism, it’s also one hell of a catalyst for the halo effect. “I agree with this post/comment”, “This post/comment took a lot of work”, and especially “It made me feel good” all mix into a sugary sauce of equal reward, whether you provide a high-value contribution or whether you just titillate the right psychological zones.
That said, I don’t think this post is a very bad one—it provides some solid arguments for its thesis, and it wouldn’t be its fault if LessWrongers jumped on that +1 button on sheer self-congratulatory reflex.
I don’t blame people for upvoting things that make them feel good, and this post is indeed well written. I just don’t like this attitude I’ve seen over and over again. Flattery is pleasant, just don’t take it too seriously.
Upvoted. For needing to be said. Badly.
I’m not stating a viewpoint on whether I agree with your premise or not; I don’t think this is the best group ever, but I have not been here long enough to know if others do.
I would, however, like to point out that the passage beginning “Yet, in spite of these priors, the group you consider yourself a member of is somehow the true best group ever?” is full of ad hominem errors that, to me, distract from your argument.
There’s no ad hominem here. The original post claims that LessWrong is great, and taw is pointing out some things that suggest that LessWrong is not great. An ad hominem here would be attacking Academian, not attacking Eliezer.
Typo: Academician->Academian
Whoops. Thanks!
How does attacking Eliezer here add to the argument?
To a large extent, and especially at the time this was written, LW was practically synonymous with Eliezer. Also, Taw is (at least primarily) referring to things Eliezer said on LW, so it seems pretty relevant to the question of LW’s greatness.
I think I understand now, thank you.
Qiaochu is questioning the presence of “ad hominem”. This issue doesn’t depend on the worth of the argument whose discussion hypothetically contains the error.
And Taw was attacking Eliezer because Eliezer is so associated with LW, and LW with him, that problems with one will often be (or at least be taken as) problems with the other. If Eliezer is systematically wrong, so are the Sequences, and thereby probably LW too.