I am saddened that this doomerism has gained so much traction in a community as great as LW
You’re aware that Less Wrong (and the project of applied rationality) literally began as EY’s effort to produce a cohort of humans capable of clearly recognizing the AGI problem?
I don’t think this is a productive way to engage here. Notwithstanding the fact that LW was started for this purpose—the ultimate point is to think clearly and correctly. If it’s true that AI will cause doom, we want to believe that AI will cause doom. If not, then not.
So I don’t think LW should be an “AI doomerist” community in the sense that people who honestly disagree with AI doom are somehow outside the scope of LW or not worth engaging with. EY is the founder, not a divinely inspired prophet. Of course, LW is and can continue to be an “AI doomerist” community in the more limited sense that most people here are persuaded by the arguments that P(doom) is relatively high. But in that sense, the kind of argument you have made is really beside the point: it works equally well regardless of the value of P(doom), and thus should not be credited.
in that sense, the kind of argument you have made is really beside the point
One interpretation of XFrequentist’s comment is that it simply points out that mukashi’s “doomerism has gained so much traction” implies a mistaken history. A corrected statement would be more like “doomerism hasn’t lost traction”.
I don’t think this is a productive way to engage here.
A “way of engaging” shouldn’t go so far as to disincentivize factual correction.
Fair enough. I interpreted XFrequentist as presenting this as an argument that AI Doomerism is correct and/or that people skeptical of Doomerism shouldn’t post those skeptical views. But I see now that your interpretation is also plausible.
Indeed, as Vladimir gleaned, I just wanted to clarify that the historical roots of LW & AGI risk run deeper than might be immediately apparent, which could offer a better explanation for the prevalence of Doomerism than, like, EY enchanting us with his eyes or whatever.