> It sounds like it was a hypothetical estimate, not a best guess
Thanks for checking the transcript! I don’t know how seriously you want to take this, but in conversation (in person) he said 5% was one of several different estimates he’d heard from virologists. This is a tricky area, because it’s not clear we want a bunch of effort going into getting a really good estimate: (a) if it turns out the probability is high, then publicizing that fact likely increases the chance we get one, and (b) building general knowledge about how to estimate the pandemic potential of viruses also seems likely to be net negative.
> Here’s another source which calculates that the annual probability of more than 100M influenza deaths is 0.01% …
I think maybe we’re talking about estimating different things? The 5% estimate was for how likely you are to get a 1918 flu pandemic conditional on release.

More from the NTI report:
> A few experts believe that LLMs could already or soon will be able to generate ideas for simple variants of existing pathogens that could be more harmful than those that occur naturally, drawing on published research and other sources. Some experts also believe that LLMs will soon be able to access more specialized, open-source AI biodesign tools and successfully use them to generate a wide range of potential biological designs. In this way, the biosecurity implications of LLMs are linked with the capabilities of AI biodesign tools.
> 5% was one of several different estimates he’d heard from virologists.
Thanks, this is helpful. And I agree there’s a disanalogy between the 1918 hypothetical and the source.
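To spell out the disanalogy with a back-of-the-envelope sketch (the annual rate of release events, \(\lambda\), is an assumed free parameter here; neither source estimates it, and this ignores natural-origin pandemics and the possibility of multiple releases in a year):

\[
P(\text{100M+ death pandemic this year}) \approx \lambda \cdot P(\text{100M+ death pandemic} \mid \text{release})
\]

On this reading the two numbers aren’t even in tension: a 5% conditional estimate and a 0.01% annual estimate are jointly consistent with \(\lambda \approx 0.002\) release events per year (about one per 500 years), since \(0.002 \times 0.05 = 0.0001 = 0.01\%\).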
> it’s not clear we want a bunch of effort going into getting a really good estimate: (a) if it turns out the probability is high, then publicizing that fact likely increases the chance we get one, and (b) building general knowledge about how to estimate the pandemic potential of viruses also seems likely to be net negative.
This seems like it might be overly cautious. Bioterrorism is already quite salient, with Rishi Sunak, the White House, and many mainstream media outlets speaking publicly about it. Even SecureBio is writing headline-grabbing papers about how AI can be used to cause pandemics. In that environment, I don’t think biologists and policymakers should refrain from gathering evidence about biorisks and how to combat them. The marginal contribution to public awareness would be small, and a better understanding of the risks could yield a net improvement in biosecurity.
For example, the probability that known pathogens would cause 100M+ deaths if released is an extremely important input for deciding whether open-source LLMs should be banned. If the answer is demonstrably yes, I’d expect the White House to significantly restrict open-source LLMs within a year or two. That benefit would far outweigh the cost of raising the issue’s salience.