I understand that, but would AI be able to remain an exception if any of its particular risks become as controversial as AGW?
With regard to global warming: if you provisionally accept that a rational person tends to hold a stance on AGW aligned with the scientific consensus, then the AGW supporters who join over the issue are, on average, better at rationality (especially applied rationality), not worse. If, however, you posit that a rational person tends to disagree with the scientific consensus on AGW, then okay, that is a very valid point: you don’t want those aligned with the scientific consensus to join in. Furthermore, I don’t see what’s so special about religion.
I am a sort of atheist, but I see the support for atheism as much, much shakier than the support for AGW. I know many people who are theists of various kinds and are otherwise quite rational, while I do not know anyone even remotely rational who disagrees with the scientific consensus, unless they are a scientist whose own novel research personally disagrees with it.
If AI in general or uFAI in particular becomes a politicized issue (not quite identical to “controversial”) to the extent that AGW now is, I suspect it’ll be grandfathered in here by the same mechanism that religion now is; it’s too near and dear a topic to too many critical community members for it to ever be entirely dismissed. However, its relative prominence might go down a notch or two; moves to promote this may already be happening, given the Center for Modern Rationality’s upcoming differentiation from SIAI.
As to applied rationality and AGW: I view agreement with the mainstream climatology position as weak but positive evidence of sanity. However, I don’t view it as particularly significant to the LW mission in a direct sense, and I think taking a vocal position on the subject would likely lower the sanity waterline by way of scaring off ideologically biased folks who might be convinced to become less ideologically biased by a consciously nonpartisan approach. There’s a lot more to lose here, rationality-wise, than there is to gain.
Well, that’s too bad then. I came to post there after reading Eliezer’s posts on the many-worlds interpretation, where he tried to debunk the SI (now that really polarizes people, even though it’s not linked to politics: trying to debunk a well-established method that works). He is somewhat sloppy at quantum mechanics and makes some technical errors, but it is very good content nonetheless, and I really enjoyed it. I don’t enjoy the meta-meta so much.