Biological global catastrophic risks were neglected for years, while AGI risks took top priority. The main reason for this is that AGI was presented as a powerful superintelligent optimizer, while germs were seen as mere mindless replicators.
I think that is an inaccurate description of why people on LessWrong have focused on AI risk over pandemic risk.
A pandemic certainly could be an existential risk, but the chance of that seems low. COVID-19 is a once-in-a-century event, and its worst-case scenario is killing ~2% of the human population. Completely horrible, yes, but not at all an existential threat to humanity. Given that no pandemic in recorded history has posed an existential threat, it seems unlikely that one will in the next few hundred years. On the other hand, AI risk is relevant in the coming century, or perhaps sooner (within decades?). It at least seems plausible to me that the danger from the two is on the same order of magnitude, and that humanity should pay roughly equal attention to the x-risk from both.
However, while there are many people out there who have been working very hard on pandemic control, there aren't many who focus on AI risk. The WHO has many researchers specializing in pandemics, along with scientists across nations, while the closest analogue for AI safety might be MIRI or FHI. This means an individual on LW might have an impact on AI risk in a way that an individual couldn't have on pandemic risk. On top of that, the crowd on LW tends to have skills suited to working on AI (software, philosophy) rather than pandemic risk (biology, epidemiology).
Finally, while it wasn't the top priority, LW has definitely talked about pandemic risk over the years. See the results on https://duckduckgo.com/?q=pandemic+risk+site%3Alesswrong.com+-coronavirus+-covid&t=ffab&ia=web