Exactly. I’m getting frustrated when we talk about risks from AI systems with the open source or e/acc communities. The open source community seems to consistently assume that the concerns are about current AI systems, and that current systems are enough to lead to significant biorisk. Nobody serious is claiming this, and it’s not what I’m seeing in any policy document or paper. This difference in starting points between the AI Safety community and the open source community pretty much makes all the difference.
Sometimes I wonder if the open source community is making this assumption on purpose, because it is rhetorically useful to say “oh, you think a little chatbot that has the same information as a library would cause a global disaster?” It’s a common tactic these days: downplay the capabilities of the AI system we’re actually talking about, and then make it seem ridiculous to regulate. If I’m being charitable, my guess is that they assume the bar for “enforce safety measures when stakes are sufficiently high” will be considerably lower than what makes sense / what they’d prefer, or that they want to wait until the risk is here, demonstrated, and obvious before we do anything.
As someone who is pro-open-source, I do think that “AI isn’t useful for making bioweapons” is ultimately a losing argument, because AI is increasingly helpful at doing many different things, and I see no particular reason that making bioweapons would be an exception. However, that’s also true of many other technologies: good luck making your bioweapon without electric lighting, paper, computers, etc. It wouldn’t be reasonable to ban paper just because it’s handy for the lab notebook in a bioweapons lab.
What would be more persuasive is some evidence that AI is relatively more useful for making bioweapons than it is for doing things in general. It’s a bit hard for me to imagine that being the case, so if it turned out to be true, I’d need to reconsider my viewpoint.
I see little reason to use that comparison rather than “will [category of AI models under consideration] improve offense (in bioterrorism, say) relative to defense?”
(explaining my disagree reaction)
I see a lot of rhetorical equivocation between risks from existing non-frontier AI systems, and risks from future frontier or even non-frontier AI systems. Just this week, an author of the new “Will releasing the weights of future large language models grant widespread access to pandemic agents?” paper was asserting that everyone on Earth has been harmed by the release of Llama2 (via increased biorisks, it seems). It is very unclear to me which future systems the AIS community would actually permit to be open-sourced, and I think that uncertainty is a substantial part of the worry from open-weight advocates.
I’m happy to see that comment being disagreed with. I’m tempted to say that someone who makes a claim like that isn’t a truly serious person (I think the paper itself is fine), but let’s grant that it’s one serious person suggesting something vaguely similar to what I said above.
And I’m also frustrated at people within the AI Safety community who are ambiguous about which models they are talking about (which leads to posts like this and makes consensus harder). It’s even worse if it’s done on purpose for rhetorical reasons.
Note that one of the main people pushing back against the comment you link is me, a member of the AI safety community.
Noted! I think there is substantial consensus within the AIS community on a central claim: that open-sourcing certain future frontier AI systems might unacceptably increase biorisks. But I think there is not much consensus on a lot of other important claims, such as which (future or even current) AI systems are acceptable to open-source and for which ones open-sourcing unacceptably increases biorisks.
I agree it would be nice to have strong categories or a formalism pinning down which future systems would be safe to open source, but it seems like an asymmetry in expected evidence to treat the lack of consensus about systems that don’t exist yet as a pro-open-sourcing position. I think it’s fair to say there is enough of a consensus that we don’t know which future systems would be safe, and so we need more work to determine this before irreversible proliferation.