While true as written, it does not necessarily exclude the parent from the community as it is.
To argue this, you would have to argue that it would be rational for an average human being not to care about AI. I welcome Eliezer or another SI member to tell me that I am wrong here. But I think they believe that if you are interested in raising the sanity waterline and refining the art of rationality, then you can’t at the same time ignore AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content.
...or maybe you simply don’t share their utilitarian values.
No one really cares all that much whether other lesswrong participants care about AI risk. This isn’t an AI forum. We’ve had very few posts on the subject (relative to the level of interest that the authors of many of those posts happen to have in it). That subject was even banned here for a while.
then what does?
Nothing, perhaps unfortunately. If I could freely wield an ‘exclude’-hammer (and there were no negative externalities for doing so), then I would exclude for consistent bad logic, excessive use of straw men and non-sequiturs, evident inability or unwillingness to learn, and especially for being overtly anti-social.
No one really cares all that much whether other lesswrong participants care about AI risk.
Maybe you could elaborate on this so that I understand it better. How could all those people who happen to contribute money to the mitigation of AI risk not “care all that much whether other lesswrong participants care about AI risk”? For that stance to make sense, the reason for their donations couldn’t possibly be that they wish to support SI. If it were, they would care a lot whether other people took the cause seriously as well, since each person who takes AI risk seriously increases the chance of SI receiving additional support.
And since this community is about rationality, and not just instrumental but also epistemic rationality, everyone who believes that AI risk is the most important issue that humanity faces right now should ask themselves why other rationalists do not care about it as well.
Either it is rational to be worried about AI risk or it isn’t. If it is rational, then, given that this community is devoted to the art of rationality, one should care about people who do not take AI risk seriously. Such people might not only increase the support for that irrational stance; their existence also hints at a general problem with the standards of the community and the effectiveness with which it teaches rationality.
You might argue that those people who do not care about AI risk simply have different preferences. I don’t see that there are many humans who have preferences that allow them to ignore their own demise and the end of all human values in the universe.
You might further argue that it isn’t irrational not to worry about AI risk. This might be true. Could you please tell me where I can find arguments that support that stance?
Maybe you could elaborate on this so that I understand it better. How could all those people who happen to contribute money to the mitigation of AI risk not “care all that much whether other lesswrong participants care about AI risk”? For that stance to make sense, the reason for their donations couldn’t possibly be that they wish to support SI. If it were, they would care a lot whether other people took the cause seriously as well, since each person who takes AI risk seriously increases the chance of SI receiving additional support.
There is a big leap from donating a trivial amount to one charity to ostracising all those who don’t.
You might further argue that it isn’t irrational not to worry about AI risk.
It is irrational. It’s just not something there is any point in being personally offended at or in excluding from the local environment. On the other hand, people who do not believe that correct reasoning about the likelihood of events is that which most effectively approximates Bayesian updating have far more cause to be excluded from the site, because this is a site where that is a core premise.
Could you please tell me where I can find arguments that support that stance?
I’m almost certain you are more likely to have collected such links than I, because I care rather a lot less about controlling people’s beliefs on the subject.
You might further argue that it isn’t irrational not to worry about AI risk.
It is irrational. It’s just not something there is any point in being personally offended at or in excluding from the local environment.
On various occasions people have voiced an antipathy to my criticisms of AI risk. If the same people do not mind that other members do not care about AI risk, then it seems valid to conclude that they don’t care what people believe as long as those people do not criticize their own beliefs.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
There is a big leap from donating a trivial amount to one charity to ostracising all those who don’t.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
Those people might now qualify their position by stating that they only have an antipathy against poor criticisms of their beliefs. But this would imply that they do not mind people who do not care about AI risk for poor reasons as long as they do not voice their reasons.
Other words could be used in place of ‘poor’ that may more accurately convey what it is that bothers people. “Incessant” or “belligerent” would be two of the more polite examples. Some would also take issue with the phrase “their beliefs”, pointing out that the criticisms aren’t sufficiently informed to be actual criticisms of their beliefs rather than straw men.
But even a trivial amount of money is a bigger signal than the proclamation that you believe that people who do not care about AI risk are irrational and that they therefore do not fit the standards of this community. The former takes more effort than writing a comment stating the latter.
It remains the case that people don’t care all that much whether other folks on lesswrong have a particular attitude to AI risk.
If you disagree with that then you disagree with Eliezer, who wrote the Sequences and who believes that they conclusively show that you should care about AI risk. If you disagree about this then you seem to fundamentally disagree with the idea of rationality this community was based on, or at least came to a different conclusion than the person who wrote most of its content. If that doesn’t exclude you from the community, then what does?
I read that as “The Leader says that we should care about X, so if you don’t care about X then you disagree with The Leader and must be shunned”. I have a lot of respect for Eliezer, but he is no god, and disagreement with him is not sacrilege. Hell, it’s right there in the name of the site—it’s not called “Right”, it’s called “Less Wrong”—as in, you’re still wrong about something, because everybody is.
I’m assuming one of these is true, because the alternative is some sort of reading comprehension failure. LW is not a community of people who lack irrational beliefs!