In particular, consider covid. It seems reasonably likely that covid was an accidental lab leak (though attribution is hard), and it also seems like it wouldn't have been that hard to engineer covid in a lab. And the damage from covid is clearly extremely high, much higher than the anthrax attacks you mention. I think people in biosecurity believe the tails are more like billions dead or the end of civilization. (I'm not sure if I believe them; the public object-level cases for this don't seem that strong, plausibly because of info-hazard concerns.)
I agree that if future open source models contribute substantially to the risk of something like covid, that would be a component in a good argument for banning them.
I'm dubious (I haven't seen much evidence) that covid itself is evidence that future open source models would so contribute. To the best of my very limited knowledge, the research being conducted was pretty basic knowledge-wise but rather expensive in equipment and time, so an LLM wouldn't have removed a blocker. (I mean, that's why it came from a US- and Chinese-government-sponsored lab for whom resources were not an issue, no?) If there is an argument to this effect, I 100% agree it is relevant. But I haven't looked into the origins of covid for years anyhow, so I'm super fuzzy on this.
Further, suppose that open-source’d AI models could assist substantially with curing cancer. In that world, what probability would you assign to these AIs also assisting substantially with bioterror?
Fair point. Certainly more than in the other world.
I do think that your story is a plausible middle ground between the two, with less intentionality, which is a reasonable prior for organizations in general.
I think the stance of "we should evaluate things on an ongoing basis and be careful about LLMs," when contrasted with "we are releasing this information on how to make plagues, in raw form, into the wild every day, with no hope of retracting it, right now," simply reflects an unjustified focus of one's hypothesis on LLMs causing dangers, as against all the other things in the world that contribute more directly to the problem. I think a clear exposition of why I'm wrong about this would be more valuable than any of the experiments I've outlined.