There’s a three-pronged answer to this, as I see it.
First: there’s a tacit moratorium on partisan-coded issues around here that do not directly concern the science of rationality or (to a lesser extent) AI, even those on which a broad consensus exists. The reasoning most often given is that taking a vocally partisan position on such topics would lead LW to attract like-minded partisans and thus dilute its rationality focus. The politics of religion is something of an exception; it’s essentially treated as a uniquely valuable example of certain biases, though I suspect that status in practice has more to do with a grandfather clause. Anthropogenic global warming is not a uniquely valuable example of any bias I can think of; it’s a salient one, but salience often comes with drawbacks.
Second: LW is not a debunking blog, nor a forum dedicated to cheering scientific consensus over folk wisdom, and it should not be except insofar as doing so serves the art and science of rational thinking. There’s considerable overlap between LW’s natural audience and that of sites which are devoted to those topics, which has on occasion misled (often ideologically opposed) newcomers into thinking it’s such a site, but even if a general consensus exists that LW’s theory and practice tends to lead to certain positions, it behooves us to guard against adopting those positions as markers of group identity. The easiest way to do that is not to talk about them.
Third, and probably most embarrassingly from the standpoint of healthy group epistemology: by the last census/survey, LW is disproportionately politically libertarian, though adherents of that ideology are an absolute minority ([left-]liberalism is slightly more popular, socialism slightly less, other political theories much less). The severity of, proper response to, and to a lesser extent existence of anthropogenic global warming remains an active topic of debate in libertarian circles, though less so in recent years. Higher sensitivity to AGW than to other conservative-coded positions may in part be a response to these demographics.
I understand that, but would AI be able to stay an exception if any of its particular risks became as controversial as AGW?
With regard to global warming: if you provisionally accept that a rational person tends to hold a stance on AGW aligned with the scientific consensus, then the AGW supporters who would join over that issue are, on average, better at rationality, especially applied rationality, not worse. If, however, you posit that a rational person tends to disagree with the scientific consensus on AGW, then okay, that is a very valid point: you wouldn’t want those aligned with the scientific consensus to join. Furthermore, I don’t see what’s so special about religion.
I am a sort of atheist, but I see the support for atheism as much, much shakier than the support for AGW. I know many people who are theists of various kinds and are otherwise quite rational, while I do not know anyone even remotely rational who disagrees with a scientific consensus, other than a scientist whose own novel research disagrees with that consensus.
If AI in general or uFAI in particular becomes a politicized issue (not quite identical to “controversial”) to the extent that AGW now is, I suspect it’ll be grandfathered in here by the same mechanism that religion now is; it’s too near and dear a topic to too many critical community members for it to ever be entirely dismissed. However, its relative prominence might go down a notch or two; moves to promote this may already be happening, given the Center for Modern Rationality’s upcoming differentiation from SIAI.
As to applied rationality and AGW: I view agreement with the mainstream climatology position as weak but positive evidence of sanity. However, I don’t view it as particularly significant to the LW mission in a direct sense, and I think taking a vocal position on the subject would likely lower the sanity waterline by way of scaring off ideologically biased folks who might be convinced to become less ideologically biased by a consciously nonpartisan approach. There’s a lot more to lose here, rationality-wise, than there is to gain.
Well, that’s too bad then. I came to post here after reading Eliezer’s posts on the many-worlds interpretation, where he tried to debunk the SI (now that really polarizes people, even though it’s not linked to politics: trying to debunk a well-established method that works). He is somewhat sloppy at quantum mechanics and makes some technical errors, but it is very good content nonetheless, and I really enjoyed it. I don’t enjoy the meta-meta so much.