In the 2014 LessWrong survey, more respondents rated bioengineered pandemics than AI as a global catastrophic risk.
It goes back further than that. Pandemic (especially the bioengineered type) was also rated as the greater risk in the 2012 survey, and it was the most feared risk in the 2011 survey, the earliest one I could find that asked the question.
It seems like it has been one of the global catastrophic risks we’ve taken most seriously here at LessWrong from the beginning. It’s one of our cached memes. It’s a large part of the reason that we rationalists, as a subculture, were able to react to the coronavirus threat so much more quickly than the mainstream. It was a possibility we had considered seriously a decade before it happened.