Finally (again, as also mentioned by others), anthrax is not the important comparison here; it's the acquisition or engineering of other highly transmissible agents that can cause a pandemic from a single (or at least, single-digit) transmission event.
At least one paper that I mention specifically gives anthrax as an example of the kind of thing that LLMs could help with, and I’ve seen the example used in other places. I think if people bring it up as a danger it’s ok for me to use it as a comparison.
LLMs are useful not just because they're information regurgitators, but because they're basically cheap domain experts. The most capable LLMs (like Claude and GPT-4) can more or less already be used like a tutor to explain complex scientific concepts, including the nuances of experimental design or reverse genetics or data analysis.
I'm somewhat dubious that a tutor specifically meant to help explain how to make a plague is going to be that much more useful than a tutor that explains biotech generally. Like, the reason this is called "dual-use" is that for every bad application there's an innocuous application.
So, if the proposal is to ban open-source LLMs because they can explain the bad applications of the in-itself innocuous thing—I just think that's unlikely to matter? If you're unable to rephrase a question in an innocuous way to some LLM, you probably aren't going to make a bioweapon even with the LLM's help, no disrespect intended to the stupid terrorists among us.
It's kinda hard for me to picture a world where the delta in difficulty of making a biological weapon between (LLM explains biotech) and (LLM explains weapon biotech) is in any way a critical point along the biological-weapons creation chain. Is that the world we think we live in? Is this the specific point you're critiquing?
If the proposal is to ban all explanation of biotechnology from LLMs and to ensure it can only be taught by humans to humans, well, I mean, I think that’s a different matter, and I could address the pros and cons, but I think you should be clear about that being the actual proposal.
For instance, the post says that “if open source AI accelerated the cure for several forms of cancer, then even a hundred such [Anthrax attacks] could easily be worth it”. This is confusing for a few different reasons: first, it doesn’t seem like open-source LLMs can currently do much to accelerate cancer cures, so I’m assuming this is forecasting into the future. But then why not do the same for bioweapons capabilities?
This makes sense as a critique: I do think that actual biotech-specific models are much, much more likely to be used for biotech research than LLMs.
I also think that there’s a chance that LLMs could speed up lab work, but in a pretty generic way like Excel speeds up lab work—this would probably be good overall, because increasing the speed of lab work by 40% and terrorist lab work by 40% seems like a reasonably good thing for the world overall. I overall mostly don’t expect big breakthroughs to come from LLMs.