Here’s a paper from FHI from 2016 on a cost-benefit analysis of GoF research:
https://www.fhi.ox.ac.uk/wp-content/uploads/GoFv9-1.pdf
I don’t know how carefully you’ve quantified “most of the EA think tanks,” but maybe worth adding some precision here?
The problem is that the paper doesn’t do a cost-benefit analysis; instead it says, “However, in the case of potential pandemic pathogens, even a very low probability of accident could be unacceptable given the consequences of a global pandemic.”
The base rate suggests otherwise: the last pandemic caused by a lab leak was less than 50 years before the paper was written, so at the time the probability wasn’t very low. The paper then goes on to call for a generalized solution to the problem. To a reader, the recommendation comes across as: “gain of function” research is a political topic, so we can push for general safety solutions that are also useful in other areas where the risks trouble us.
A non-generalized solution would be to say: “No biosafety level 3/4 labs in cities. Put them all in remote areas and require researchers leaving them to undergo a 14-day quarantine.” The fact that Baric developed his SARS 2.0 outside of a biosafety level 4 lab is mind-boggling.
When speaking about asteroids as an X-risk, talking of a very low probability makes sense. Skipping the cost-benefit analysis and speaking of a very low probability here gives people reading FHI papers a false impression of the risks.
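To make the complaint concrete, here is a minimal back-of-the-envelope sketch of the kind of cost-benefit comparison being asked for, in Python. Every number in it (the 50-year window, the pandemic cost, the assumed research benefit) is an illustrative placeholder, not a figure from the FHI paper or from this thread:

```python
# Back-of-the-envelope sketch of a cost-benefit comparison for GoF research.
# All numbers are illustrative placeholders, not estimates from the paper.

pandemics_from_lab_leaks = 1      # the single lab-leak pandemic the comment refers to
years_observed = 50               # the rough time window mentioned above

# Naive base rate: one lab-leak pandemic per ~50 years of observation.
annual_leak_probability = pandemics_from_lab_leaks / years_observed  # = 0.02

pandemic_cost_usd = 1e13           # placeholder order of magnitude for a global pandemic
annual_research_benefit_usd = 1e9  # placeholder value assigned to GoF research per year

expected_annual_cost = annual_leak_probability * pandemic_cost_usd

print(f"annual leak probability ~ {annual_leak_probability:.1%}")
print(f"expected annual cost    ~ ${expected_annual_cost:,.0f}")
print(f"assumed annual benefit  ~ ${annual_research_benefit_usd:,.0f}")
if expected_annual_cost > annual_research_benefit_usd:
    print("under these placeholder numbers, expected cost exceeds benefit")
else:
    print("under these placeholder numbers, benefit exceeds expected cost")
```

Under these placeholder numbers the expected annual cost swamps the assumed benefit; whether that holds with defensible inputs is exactly what an actual cost-benefit analysis would have to establish, and it is the calculation the quoted sentence declines to do.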
Well, you may not like their approach, but the original argument you were making, I think, was that EA think tanks weren’t addressing this issue. This paper certainly dealt with the topic in more depth than the listicle, not that that’s saying much, and it did so 2-3 years earlier. Also, it took me all of 10 seconds to find it. So again, can you be a little more precise about what you mean by “most EA think tanks”?
Or are you mainly saying that you’d have liked to see EA screaming against gain-of-function research at a giant, obvious, institutional level, rather than writing some tidy policy papers?