I’m interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI.
If someone like Eliezer Yudkowsky reads this then they probably think that the most important cognitive bias you have is that you are not interested in AI :-)
A comment by Eliezer Yudkowsky:
[…] an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project […]
You’re selectively misquoting that comment, in particular removing the third criterion of importance listed in it that has nothing to do with AI. In context EY’s comment does not at all seem to dismiss non-FAI concerns, but in your recap it does. Fie.
You’re selectively misquoting that comment, in particular removing the third criterion of importance listed in it that has nothing to do with AI.
I linked to the original comment. I didn’t mention the third point because I think that it is abundantly clear that Less Wrong has been created with the goal in mind of getting people to support SI:
The Sequences were written with the goal of convincing people to take risks from AI seriously and therefore donate to SI: “...after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity...” (Reference: An interview with Eliezer Yudkowsky).
Less Wrong is used to ask for donations.
You can find a logo with a link to SI in the header and a logo and a link to LessWrong on SIAI’s frontpage.
LessWrong is mentioned as an achievement of SI (Quote: “Less Wrong is important to the Singularity Institute’s work towards a beneficial Singularity”).
A quote from the official SIAI homepage: “Less Wrong is [...] a key venue for SIAI recruitment”.
Now if you say that you don’t care about AI, that does pretty much exclude you from the group of people this community is meant to allure.
Nothing of what you just wrote justifies your changing the meaning of the comment you quoted by selectively removing parts of it you happen to think are not representative.
Regarding the rest of your comment: it both distorts history and makes irrelevant points. LessWrong was created as a rationality community, not an AI risk propaganda vehicle, even though yes, that was one of the goals (in fact, LW had an initial taboo period on the AI risk theme specifically to strengthen the other interests). The connections between LW and SIAI do not mean that one exists solely for the sake of the other. And finally, and most importantly, even if EY did create LW solely for the purpose of getting more money for SIAI—which I don’t believe—that’s no reason for other users of the site to obey the same imperative or share the same goal. I’m sympathetic towards SIAI but far from being convinced by them and I’m able to participate in LW just fine. I’m far from being alone in this. LW is what its userbase makes it.
The passive voice in “this community is meant to allure” makes it almost a meaningless statement. Who is doing the meaning? LW is what it is, and nobody has to care who it is “meant to allure”. It allures people who are drawn to topics discussed on it.
Note that as Eliezer says here:
But of course, not all the rationalists I create will be interested in my own project—and that’s fine. You can’t capture all the value you create, and trying can have poor side effects.
I expect to be able to find at least a dozen quotes where he contradicts himself there, if I cared enough to spend that much time looking for them. Here are just a few:
(Please read up on the context.)
I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.
...
I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays...
...
…if Omega tells me that I’ve actually managed to do worse than nothing on Friendly AI, that of course has to change my opinion of how good I am at rationality or teaching others rationality,...
Given the evidence I find it hard to believe that he does not care if lesswrong members do not believe that AI risk is the most important issue today. I also don’t think that he would call someone a rationalist who has read everything that he wrote and decided not to care about AI risk.
You’ve got selective quotation down to an art form. I’m a bit jealous.
While true as written, it does not necessarily exclude the parent from the community as it is.
...it does not necessarily exclude the parent from the community as it is.
To argue this you would have to argue that as an average human being it would be rational not to care about AI. I welcome Eliezer or another SI member to tell me that I am wrong here. But I think that they believe that if you are interested in raising the sanity waterline and refining the art of rationality then you can’t at the same time ignore AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content.
...or maybe you simply don’t share their utilitarian values.
No one really cares all that much whether other lesswrong participants care about AI risk. This isn’t an AI forum. We’ve had very few posts on the subject (relative to the level of interest in the subject that the authors of many of the posts happen to have). That subject was even banned here for a while.
then what does?
Nothing, perhaps unfortunately. If I could freely wield an ‘exclude’-hammer (and there were no negative externalities for doing so) then I would exclude people for consistent bad logic, excessive use of straw men and non-sequiturs, evident inability or unwillingness to learn, and especially for being overtly anti-social.
No one really cares all that much whether other lesswrong participants care about AI risk.
Maybe you could elaborate on this so that I understand it better. How could all those people who happen to contribute money to the mitigation of AI risk not “care all that much whether other lesswrong participants care about AI risk”? For that stance to make sense, the reason for their donations couldn’t possibly be that they wish to support SI. Because then they would care a lot if other people took the cause seriously as well, since each person who takes AI risk seriously does increase the chance of SI receiving additional support.
And since this community is about rationality, and not just instrumental but also epistemic rationality, everyone who believes that AI risk is the most important issue that humanity faces right now should ask themselves why other rationalists do not care about it as well.
Either it is rational to be worried about AI risk or it isn’t. If it is rational then, given that this community is devoted to the art of rationality, one should care about people who do not take AI risk seriously, since they not only lend support to that irrational stance, but their existence also hints at a general problem with the standards of the community and the effectiveness with which it teaches rationality.
You might argue that those people who do not care about AI risk simply have different preferences. I don’t see that there are many humans who have preferences that allow them to ignore their own demise and the end of all human values in the universe.
You might further argue that it isn’t irrational not to worry about AI risk. This might be true. Could you please tell me where I can find arguments that support that stance?
Maybe you could elaborate on this so that I understand it better. How could all those people who happen to contribute money to the mitigation of AI risk not “care all that much whether other lesswrong participants care about AI risk”? For that stance to make sense, the reason for their donations couldn’t possibly be that they wish to support SI. Because then they would care a lot if other people took the cause seriously as well, since each person who takes AI risk seriously does increase the chance of SI receiving additional support.
Donating a trivial amount to one charity is a big leap from ostracising all those that don’t.
You might further argue that it isn’t irrational not to worry about AI risk.
It is irrational. It’s just not something there is any point being personally offended at or excluding from the local environment. On the other hand, people not believing that correct reasoning about the likelihood of events is that which most effectively approximates Bayesian updating have far more cause to be excluded from the site—because this is a site where that is a core premise.
Could you please tell me where I can find arguments that support that stance?
I’m almost certain you are more likely to have collected such links than I, because I care rather a lot less about controlling people’s beliefs on the subject.
You might further argue that it isn’t irrational not to worry about AI risk.
It is irrational. It’s just not something there is any point being personally offended at or excluding from the local environment.
On various occasions people have voiced an antipathy to my criticisms of AI risk. If the same people do not mind if other members do not care about AI risk, then it seems to be a valid conclusion that they don’t care what people believe as long as they do not criticize their own beliefs.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
Donating a trivial amount to one charity is a big leap from ostracising all those that don’t.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
Other words could be used in the place of ‘poor’ that may more accurately convey what it is that bothers people. “Incessant” or “belligerent” would be two of the more polite examples of such. Some would also take issue with the “their beliefs” phrase, pointing out that the criticisms aren’t sufficiently informed to be actual criticisms of their beliefs rather than straw men.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
It remains the case that people don’t care all that much whether other folks on lesswrong have a particular attitude to AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
I read that as “The Leader says that we should care about X, so if you don’t care about X then you disagree with The Leader and must be shunned”. I have a lot of respect for Eliezer, but he is no god, and disagreement with him is not sacrilege. Hell, it’s right there in the name of the site—it’s not called “Right”, it’s called “Less Wrong”—as in, you’re still wrong about something, because everybody is.
I’m assuming one of these is true, because the alternative is some sort of reading comprehension failure. LW is not a community of people who lack irrational beliefs!