Fair enough. I interpreted XFrequentist as presenting this argument as an argument that AI Doomerism is correct and/or that people skeptical of Doomerism shouldn’t post those skeptical views. But I see now how your interpretation is also plausible.
Indeed, as Vladimir gleaned, I just wanted to clarify that the historical roots of LW & AGI risk are deeper than might be immediately apparent, which could offer a better explanation for the prevalence of Doomerism than, like, EY enchanting us with his eyes or whatever.