Let’s be explicit here—your suggestion is that people like me should not be here. I’m a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I’m interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I’ve read most of the core posts of LW, but haven’t gone through most of the sequences in any rigorous way (i.e. read them in order).
I agree that there seem to be a number of low-quality posts in Discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I’ve had several ideas for posts (in Discussion) that I don’t post because I don’t think they would meet the community’s expected quality standard.
Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the “raising the sanity line” mission. That’s not what I want.
No martyrs allowed.
I don’t propose simply disallowing people who haven’t read everything from being taken seriously, as long as they don’t say anything stupid. It’s fine if you haven’t read the sequences and don’t care about AI or heavy philosophy stuff; I just don’t want to read dumb posts about those topics from someone who hasn’t read the relevant material.
As a matter of fact, I was careful not to propose much of anything. Don’t confuse “here’s a problem that I would like solved” with “I endorse this stupid solution that you don’t like”.
Fair enough. But I think you cast a wide net over the problem. To the extent that you are unhappy that noobs are “spouting garbage that’s been discussed to death” and aren’t being sufficiently punished for it, you could say that instead. If that’s not what you are concerned about, then I have failed to comprehend your message.
Exclusivity might solve the problem of noobs rehashing old topics from the beginning (and I certainly agree that needing to tell everyone that beliefs must make predictions about the future gets old very fast). But it would have multiple knock-on effects that you have not even acknowledged. My intuition is that evaporative cooling would be bad for this community, but your sense may differ.
I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all, law is humanity’s main attempt to formalize morality, so I would expect some overlap with FAI.
I don’t mind people who haven’t read the sequences, as long as they don’t start spouting garbage that’s already been discussed to death and act all huffy when we tell them so; common failure modes are “Here’s an obvious solution to the whole FAI problem!”, “Morality all boils down to X”, and “You people are a cult, you need to listen to a brave outsider who’s willing to go against the herd like me”.
If you’re interested in concrete feedback, I found your engagement in discussions with hopeless cases a negative contribution, which is a consideration unrelated to the quality of your own contributions (including in those discussions). Basically, a violation of “Don’t feed the clueless (just downvote them)” (this post suggests widening the sense of “clueless”), which is one policy that could help with improving the signal/noise ratio. Perhaps this policy should be publicized more.
I support not feeding the clueless, but I would like to emphasize that that policy should not bleed into failing to explain downvotes of otherwise clueful people. There aren’t many things more aggravating than participating in a discussion where most of my comments get upvoted, but one gets downvoted and I never find out what the problem was—or seeing some comment I upvoted be at −2, and not knowing what I’m missing. So I’d like to ask everyone: if you downvote one comment for being wrong, but think the poster isn’t hopeless, please explain your downvote. It’s the only way to make the person stop being wrong.
Case in point: this discussion currently includes a 30-comment argument with a certain Clueless, most of whose contributions are downvoted-to-hidden. That discussion shouldn’t have taken place; its existence is a Bad Thing. I just went through it and downvoted most of those who participated, except for the Clueless, who was already downvoted Sufficiently.
I expect a tradition of discouraging both sides of such discussions would significantly reduce their impact.
While I usually share a similar sentiment, upon consideration I disagree with your prediction when it comes to the example conversation in question.
People explaining things to the Clueless is useful, both to the person doing the explaining and to anyone curious enough to read along. This is conditional on the people in the interaction having the patience to try to decipher the nature of the inferential distance and to break down the ideas into effective explanations of the concepts—including links to relevant resources. (This precludes cases where the conversation degenerates into bickering and excessive expressions of frustration.)
Trying to explain what is usually simply assumed—to a listener who is at least willing to communicate in good faith—can be a valuable experience to the one doing the explaining. It can encourage the re-examination of cached thoughts and force the tracing of the ideas back to the reasoning from first principles that caused you to believe them in the first place.
There are many conversations where downvoting both sides of a discussion is advisable, yet it isn’t conversations with the “Clueless” that are the problem. It is conversations with Trolls, Dickheads and Debaters of Perfect Emptiness that need to go.
Startlingly, Googling “Debaters of Perfect Emptiness” turned up no hits. This is not the best of all possible worlds.
Think “Lawyer”, “Politician” or the bottom line.
Sorry, I wasn’t clear. I understood perfectly well what you meant by the phrase and was delighted by it. What I meant to convey was that I was saddened to discover that I lived in a universe where it was not a phrase in common usage, which it most certainly ought to be.
Oh, gotcha. I’m kind of surprised we don’t have a post on it yet. Lax of me!
I accept your criticism in the spirit in which it was intended—but I’m not sure you are stating a local consensus rather than your personal preference. Consider the recent exchange I was involved in. It doesn’t appear to me that the more wrong party has been downvoted to oblivion, though by your rule he should have been. (Specifically, the Main post has been downvoted, but not the comment discussion.)
Philosophically, I think it is unfortunate that the people who believe that almost all terminal values are socially constructed are the same people who think empiricism is a useless project. I don’t agree with the latter point (i.e. I think empiricism is the only true cause of human advancement), but the former point is powerful and has numerous relevant implications for Friendly AI and raising the sanity line generally. So when anti-empiricism social construction people show up, I try to persuade them that empiricism is worthwhile so that their other insights can benefit the community. Whether this persuasion is possible is a distinct question from whether the persuasion is a “good thing.”
Note that your example is not that pattern, and I haven’t responded to Clueless. C is anti-empiricism, but he hasn’t shown anything that makes me think that he has anything valuable to contribute to the community—he’s 100% confused. So it isn’t worth my time to try to persuade him to be less wrong.
I’m stating an expectation of a policy’s effectiveness.
I think Monkeymind is deliberately trying to gather lots of negative karma as fast as possible. Maybe for a bet?
If the goal was −100, then writing should stop now (prediction).
I’m not the one who downvoted you, but if I were to hazard a guess, I’d say you were downvoted because when you start off by saying “people like me”, it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of “people like me”, you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms “less wrong” and “more wrong” to describe elites and peasants of LW, but I’m using them to point out to you the identity you’ve painted for yourself.)
Also, there is /always/ something you can do about a problem. The answer to this particular problem is not, “Noobs will be noobs, let’s give up”.
If by “giving up on trying to be less wrong,” you mean I’m never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful.
Raising the sanity line does not require any of those things.
Don’t put up straw men; I never said that to be less wrong, you had to do all those things. “Less wrong” represents an attitude towards the world, not an endpoint.
Then I do not understand what you mean when you say I am “giving up on trying to be less wrong”.
Could I get an explanation for the downvotes?
I’m interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI.
If someone like Eliezer Yudkowsky reads this, then they probably think that the most important cognitive bias you have is that you are not interested in AI :-)
A comment by Eliezer Yudkowsky:
[…] an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project […]
You’re selectively misquoting that comment, in particular removing the third criterion of importance listed in it that has nothing to do with AI. In context EY’s comment does not at all seem to dismiss non-FAI concerns, but in your recap it does. Fie.
I linked to the original comment. I didn’t mention the third point because I think it is abundantly clear that Less Wrong was created with the goal of getting people to support SI:
The Sequences were written with the goal of convincing people to take risks from AI seriously and therefore donate to SI: “...after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity...” (Reference: An interview with Eliezer Yudkowsky).
Less Wrong is used to ask for donations.
You can find a logo with a link to SI in the header and a logo and a link to LessWrong on SIAI’s frontpage.
LessWrong is mentioned as an achievement of SI (Quote: “Less Wrong is important to the Singularity Institute’s work towards a beneficial Singularity”).
A quote from the official SIAI homepage: “Less Wrong is [...] a key venue for SIAI recruitment”.
Now if you say that you don’t care about AI, that does pretty much exclude you from the group of people this community is meant to allure.
Nothing of what you just wrote justifies your changing the meaning of the comment you quoted by selectively removing parts of it you happen to think are not representative.
Regarding the rest of your comment: it both distorts history and makes irrelevant points. LessWrong was created as a rationality community, not an AI risk propaganda vehicle, even though yes, that was one of the goals (in fact, LW had an initial taboo period on the AI risk theme specifically to strengthen the other interests). The connections between LW and SIAI do not mean that one exists solely for the sake of the other. And finally, and most importantly, even if EY did create LW solely for the purpose of getting more money for SIAI—which I don’t believe—that’s no reason for other users of the site to obey the same imperative or share the same goal. I’m sympathetic towards SIAI but far from being convinced by them and I’m able to participate in LW just fine. I’m far from being alone in this. LW is what its userbase makes it.
The passive voice in “this community is meant to allure” makes it almost a meaningless statement. Who is doing the meaning? LW is what it is, and nobody has to care who it is “meant to allure”. It allures people who are drawn to topics discussed on it.
Note that as Eliezer says here:
But of course, not all the rationalists I create will be interested in my own project—and that’s fine. You can’t capture all the value you create, and trying can have poor side effects.
I expect I could find at least a dozen quotes where he contradicts himself there, if I cared enough to spend that much time looking for them. Here are just a few:
(Please read up on the context.)
I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.
...
I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays...
...
…if Omega tells me that I’ve actually managed to do worse than nothing on Friendly AI, that of course has to change my opinion of how good I am at rationality or teaching others rationality,...
Given the evidence, I find it hard to believe that he does not care whether LessWrong members believe that AI risk is the most important issue today. I also don’t think that he would call someone a rationalist who has read everything that he wrote and decided not to care about AI risk.
You’ve got selective quotation down to an art form. I’m a bit jealous.
While true as written, it does not necessarily exclude the parent from the community as it is.
To argue this, you would have to argue that, as an average human being, it would be rational not to care about AI. I welcome Eliezer or another SI member to tell me that I am wrong here. But I think they believe that if you are interested in raising the sanity waterline and refining the art of rationality, then you can’t at the same time ignore AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
...or maybe you simply don’t share their utilitarian values.
No one really cares all that much whether other lesswrong participants care about AI risk. This isn’t an AI forum. We’ve had very few posts on the subject (relative to the level of interest in the subject that the authors of many of the posts happen to have). That subject was even banned here for a while.
then what does?
Nothing, perhaps unfortunately. If I could freely wield an ‘exclude’-hammer (and there were no negative externalities for doing so), I would use it on consistent bad logic, excessive use of straw men and non-sequiturs, evident inability or unwillingness to learn, and especially on overtly anti-social behaviour.
Maybe you could elaborate on this so that I understand it better. How could all those people who happen to contribute money to the mitigation of AI risk not “care all that much whether other lesswrong participants care about AI risk”? For that stance to make sense, the reason for their donations couldn’t possibly be that they wish to support SI, because then they would care a lot whether other people took the cause seriously as well, since each person who takes AI risk seriously increases the chance of SI receiving additional support.
And since this community is about rationality, and not just instrumental but also epistemic rationality, everyone who believes that AI risk is the most important issue that humanity faces right now should ask themselves why other rationalists do not care about it as well.
Either it is rational to be worried about AI risk or it isn’t. If it is rational, then, given that this community is devoted to the art of rationality, one should care about people who do not take AI risk seriously. They might not only increase the support for that irrational stance; their existence also hints at a general problem with the standards of the community and the effectiveness with which it teaches rationality.
You might argue that those people who do not care about AI risk simply have different preferences. I don’t see that there are many humans who have preferences that allow them to ignore their own demise and the end of all human values in the universe.
You might further argue that it isn’t irrational not to worry about AI risk. This might be true. Could you please tell me where I can find arguments that support that stance?
Donating a trivial amount to one charity is a big leap from ostracising all those that don’t.
You might further argue that it isn’t irrational not to worry about AI risk.
It is irrational. It’s just not something there is any point in being personally offended at, or excluding from the local environment. On the other hand, people who do not believe that correct reasoning about the likelihood of events is that which most effectively approximates Bayesian updating have far more cause to be excluded from the site—because this is a site where that is a core premise.
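For readers unfamiliar with the premise referred to above, here is a minimal sketch of what a single Bayesian update looks like; the hypothesis, evidence, and numbers are purely illustrative assumptions, not anything drawn from the discussion.

```python
# A single Bayesian update, P(H|E) = P(E|H) * P(H) / P(E), with illustrative numbers.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior P(H|E) given a prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)  # total probability of E
    return p_e_given_h * prior / p_e

# Hypothetical example: a 1% prior, and evidence nine times likelier under H than under not-H.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(f"P(H|E) = {posterior:.3f}")  # ~0.083: even fairly strong evidence moves a 1% prior only so far
```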
Could you please tell me where I can find arguments that support that stance?
I’m almost certain you are more likely to have collected such links than I, because I care rather a lot less about controlling people’s beliefs on the subject.
On various occasions people have voiced an antipathy to my criticisms of AI risk. If the same people do not mind if other members do not care about AI risk, then it seems to be a valid conclusion that they don’t care what people believe as long as they do not criticize their own beliefs.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
Donating a trivial amount to one charity is a big leap from ostracising all those that don’t.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
Other words could be used in the place of ‘poor’ that may more accurately convey what it is that bothers people. “Incessant” or “belligerent” would be two of the more polite examples of such. Some would also take issue with the “their beliefs” phrase, pointing out that the criticisms aren’t sufficiently informed to be actual criticisms of their beliefs rather than straw men.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
It remains the case that people don’t care all that much whether other folks on lesswrong have a particular attitude to AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
I read that as “The Leader says that we should care about X, so if you don’t care about X then you disagree with The Leader and must be shunned”. I have a lot of respect for Eliezer, but he is no god, and disagreement with him is not sacrilege. Hell, it’s right there in the name of the site—it’s not called “Right”, it’s called “Less Wrong”—as in, you’re still wrong about something, because everybody is.
I’m assuming one of these is true, because the alternative is some sort of reading comprehension failure. LW is not a community of people who lack irrational beliefs!