It seems reasonably likely that covid was an accidental lab leak (though attribution is hard), and it also seems like it wouldn't have been that hard to engineer covid in a lab.
Seems like a reassuring update on human-caused bioterrorism, right? It's so easy for stuff to leak that covid accidentally got out, and it might even have been easy to engineer, but (apparently) no one engineered it, nor am I aware of this kind of intentional bioterrorism happening elsewhere. People apparently aren't doing it. See Gwern's Terrorism is not effective.
Maybe smart LLMs come out. I bet people still won’t be doing it.
So what's the threat model? One can say "tail risks", but, as OP points out, how much do LLMs really accelerate people's ability to deploy dangerous pathogens compared to current possibilities? And what off-the-cuff probabilities are we talking about here?