Noted! I think there is substantial consensus within the AIS community on a central claim: that open-sourcing certain future frontier AI systems might unacceptably increase biorisks. But I think there is much less consensus on other important claims, such as which (future, or even current) AI systems are acceptable to open-source and which ones would unacceptably increase biorisks if open-sourced.
I agree it would be nice to have strong categories or a formalism pinning down which future systems would be safe to open-source, but treating the lack of consensus about systems that don't yet exist as a pro-open-sourcing position seems like an asymmetric reading of the expected evidence. I think it's fair to say there is enough consensus that we don't know which future systems would be safe, and so we need more work to determine this before proliferation becomes irreversible.
Note that one of the main people pushing back against the comment you link is me, a member of the AI safety community.