[Added April 28th: In case someone reads my comment without this context: David has made a number of worthwhile contributions to discussions of biological existential risks (e.g. 1, 2, 3), has worked professionally in this area, and his contributions on this topic are quite often well worth engaging with. Here I just intended to add that, in my opinion, early in the Covid pandemic he messed up pretty badly in one or two critical discussions around mask effectiveness and censoring criticism of the CDC. Perhaps that’s not saying much, since the base rate for relevant experts dealing with Covid was also that they were very off the mark. Furthermore, David’s June 2020 post-mortem of his mistakes was a good public service, even though I don’t agree with his self-assessment in all cases. Overall, I think his arguments are often well worth engaging with.]
I’m not in touch with the ground truth in this case, but for those reading along without knowing the context, I’ll mention that it wouldn’t be the first time that David has misrepresented what people in the Effective Altruism Biorisk professional network believe[1].
(I will mention that David later apologized for handling that situation poorly and wasting people’s time[2], which I think reflects positively on him.)
[1] See Habryka’s response to Davidmanheim’s comment here from March 7th, 2020, such as this quote:
Overall, my sense is that you made a prediction that people in biorisk would consider this post an infohazard that had to be prevented from spreading (you also reported this post to the admins, saying that we should “talk to someone who works in biorisk at FHI, Openphil, etc. to confirm that this is a really bad idea”).
We have now done so, and in this case others did not share your assessment (and I expect most other experts would give broadly the same response).
[2] See David’s own June 25th reply to the same comment.
My guess is more that we were talking past each other than that his intended claim was false/unrepresentative. I do think it’s true that EAs mostly talk about people doing gain-of-function research as the problem, rather than about the insufficiency of the safeguards; I just think the latter is why the former is a problem.
The OP claimed a failure of BSL levels was the single thing that induced biorisk as a cause area, and I said that was a confused claim. Feel free to find someone who disagrees with me here, but the proximate causes of EAs worrying about biorisk have nothing to do with BSL lab designations. It’s not BSL levels that failed in allowing things like the Soviet bioweapons program, or led to the underfunded and largely unenforceable BWC, or the way that newer technologies are reducing the barriers to terrorists and others being able to pursue bioweapons.
I think we must still be missing each other somehow. To reiterate, I’m aware that there is non-accidental biorisk, for which one can hardly blame the safety measures. But there is also accident risk, since labs often fail to contain pathogens even when they’re trying to.
Having written extensively about it, I promise you I’m aware. But please, tell me more about how this supports the original claim I have been disagreeing with: that this class of incidents was or is the primary concern of the EA biosecurity community, the one that led to it being a cause area.
I agree there are other problems the EA biosecurity community focuses on, but surely lab escapes are one of those problems, and part of the reason we need biosecurity measures? In any case, this disagreement seems beside the main point that I took Adam to be making, namely that the track record for defining appropriate units of risk for poorly understood, high-attack-surface domains is quite bad (as with BSL). This still seems true to me.
BSL isn’t the thing that defines “appropriate units of risk”; that’s pathogen risk-group levels, and I agree those are a problem because they focus on pathogen lists rather than actual risks. I actually think BSLs are good at what they do, and the problem is regulation and oversight, which is patchy, as well as transparency, of which there is far too little. But those are issues with oversight, not with the types of biosecurity measures that are available.
This thread doesn’t seem very productive to me, so I’m going to bow out after this. But yes, it is a primary concern—at least in the case of Open Philanthropy, it’s easy to check what their primary concerns are because they write them up. And accidental release from dual use research is one of them.
If you’re appealing to OpenPhil, it might be useful to ask one of the people who was working with them on this as well.
And you’ve now equivocated between “they’ve induced an EA cause area” and a list of the range of risks covered by biosecurity (which is not a statement of their primary concerns), citing this as “one of them.” I certainly agree that biosecurity levels are one of the things biosecurity is about, and that “the possibility of accidental deployment of biological agents” is a key issue, but that’s incredibly far removed from the original claim that the failure of BSL levels induced the cause area!