If you tighten your reference class even further, to include only historical biological attacks by individuals or small groups, the deadliest killed just five people: the 2001 anthrax attacks.
It’s worth noting that those attacks were carried out either by Bruce Edwards Ivins, who was paid out of funds meant to defend against bioattacks, or by someone in his vicinity.
It seems strange to me that the recommendations you make don’t take that into account.
The idea that lay people using LLMs are worth worrying more about than people with expertise and access to top laboratories seems wrong to me. It’s just an easy position to hold because it’s not inconvenient for people with power.
> The idea that lay people using LLMs are worth worrying more about than people with expertise and access to top laboratories seems wrong to me.
I agree it’s definitely wrong today. I’m concerned it may stop being wrong in the future if we don’t get our act together, because biology is currently democratizing quickly while the number of people at top labs is relatively constant.
I think efforts to reduce insider risk are also really valuable, but these look less like the kind of technical work I’ve been focusing on and more like better policies at labs and not engaging in particular kinds of risky research. I’m excited for other people to work on these!
(Also, the second half of my list and Esvelt’s “Detect” and “Defend” apply regardless of where the attack originates.)
> I think efforts to reduce insider risk are also really valuable, but these look less like the kind of technical work I’ve been focusing on and more like better policies at labs and not engaging in particular kinds of risky research.
Of your proposals, the LLM question seems to me to be a policy question. Faster evaluation of vaccines is also largely about policy.
In general, that sentiment sounds a bit like “It’s easy to search for the keys under the lamppost, so that’s what I will do”.
Esvelt’s threat model doesn’t include “people who work on vaccines releasing the pathogen for their own gain”, which is what Bruce Edwards Ivins did, according to the FBI.
Esvelt does say dangerous things like “Only after intense discussions at the famous Asilomar conference of 1975 did they correctly conclude that recombinant DNA within carefully chosen laboratory-adapted constructs posed no risk of spreading on its own.”
While you might argue that the amount of risk is acceptable, pretending that it’s zero costs Kevin Esvelt credibility when it comes to actually reducing risk. He lists a bunch of interventions that EA funders can spend their money on so that they feel like they are taking effective action on biorisk, while not addressing the center of the risk.