I feel like this just happened? There were a good number of articles written about this, see for example this article by the Global Priorities Project on GoF research:
http://globalprioritiesproject.org/wp-content/uploads/2016/03/GoFv9-3.pdf
I also remember a number of other articles by people working on biorisk, but I would have to dig them up. Overall, I had the sense that there was a bunch of funding, a bunch of advocacy, and a bunch of research on this topic.
I searched LessWrong itself for “gain of function” and it didn’t bring up much. Searching for it on OpenPhil finds it mentioned a few times, so it seems that while OpenPhil came into contact with the topic, they failed to identify it as a cause area that needed funding.
All the hits on OpenPhil are from 2017 and before, and in 2018 the Trump administration ended the ban on gain-of-function research. That should have been a moment of public protest by our community.
Are you saying we should have been marching in the streets and putting up banners? Criticizing the end of the ban more in public? Or taking steps against it, somehow, using alternative mechanisms like advocacy with any contacts we might have in the world of virology?
The first step would be to do the same things we do for other X-risks. In the case of OpenPhil, the topic should have been important enough for them to task a researcher with summarizing the state of the topic and what should be done about it. That’s OpenPhil’s procedure for dealing with topics that matter.
That analysis might have resulted in the observation that this Marc Lipsitch guy seems to have a good grasp of the subject, and then in funding him with a million per year to do something.
It’s not clear that funding Lipsitch would have been enough, but it would have been in line with “we tried to do something with our toolkit”.
With research, it’s hard to know in advance what you will find if you invest in a bunch of smart people thinking about a topic and how to deal with it.
In retrospect, finding out that the NIH illegally funneled money to Baric and Shi in circumvention of the moratorium imposed by the Office of Science and Technology Policy, and then challenging that publicly, might have prevented this pandemic. Being part of a scandal about an illegal transfer of funds would likely have seriously damaged Shi’s career, given the importance of being seen as respectable in China.
Finding that out at the time would have required reading a lot of papers to understand what was going on, but I think it’s quite plausible that a researcher who read through the top 200 gain-of-function research papers attentively and tried to build a good model of what was happening might have caught it.
Some relevant links:

https://www.openphilanthropy.org/sites/default/files/Lipsitch%201-29-14%20%28public%29.pdf
OpenPhil conversation notes from 2014 with Lipsitch

https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/harvard-university-biosecurity-and-biosafety
Grant to Lipsitch in 2020 for ~$320k

https://www.openphilanthropy.org/giving/grants
All OpenPhil grants in the biosecurity area

Don’t think they prove anything, but seem useful references.

I do think they suggest the situation is better than I initially thought, given that funding Lipsitch / the Johns Hopkins Center for Health Security is a good idea.

I read through their report Research and Development to Decrease Biosecurity Risks from Viral Pathogens:
How could the problem eventually be solved or substantially alleviated? We believe that if a subset of the following abilities/resources were developed, the risk of a globally catastrophic pandemic would be substantially reduced:
A better selection of well-stocked, broad-spectrum antiviral compounds with low potential for development of resistance
Ability to confer immunity against a novel pathogen in fewer than 100 days
Widespread implementation of intrinsic biocontainment technologies that can reliably contain viral pathogens in the lab without impairing research
Improved countermeasures for non-viral conventional pathogens
Rapid, inexpensive, point-of-care diagnostics for all known pathogens
Inexpensive, ubiquitous metagenomic sequencing
Targeted countermeasures for the most dangerous viral pathogens
I do think that list is missing ways to reduce gain-of-function research, and that it instead encourages gain-of-function research by funding “Targeted countermeasures for the most dangerous viral pathogens”.
Not talking about the tradeoff between developing countermeasures against viruses and the risk caused by gain-of-function research seems to me a big omission. Not speaking about the dangers of gain-of-function research likely reduces conflict with virologists.
The report suggests to me that they let themselves be conned by researchers who suggest that developing immunity against a novel pathogen in fewer than 100 days is about developing new vaccination platforms, when it is mostly about regulation and finding ways to verify drug safety in short amounts of time.
Fighting for changes in laws about drug regulation means getting into conflicts, while funding vaccine platforms is conflict-free.
Unsexy approaches like reducing the number of surfaces touched by multiple people, or researching better air filters/humidifiers to reduce transmission of all viruses, are also off the roadmap.
I have now read the paper, and given what we saw last year, the market mechanism it proposes seems flawed. If we had an insurance company responsible for paying out the damage created by the pandemic, that company would be insolvent and unable to pay for the damage. At the same time, the suppression of the lab leak hypothesis would have been even stronger when the existence of a billion-dollar company depends on people not believing in the lab leak hypothesis (with all the counterparty risk that comes with a major insurance company going bankrupt).
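As a rough sanity check on the insolvency point, here is a back-of-the-envelope sketch. The capital and damage figures are my own illustrative assumptions (order-of-magnitude guesses), not numbers from the paper:

```python
# Back-of-the-envelope check of the insurance idea. All numbers are
# illustrative assumptions, not figures from the paper.

insurer_capital = 100e9    # assumed capitalization of a large dedicated insurer, USD
pandemic_damage = 10e12    # assumed global economic damage of a COVID-class pandemic, USD

coverage_ratio = insurer_capital / pandemic_damage
shortfall = pandemic_damage - insurer_capital

print(f"Share of the damage the insurer could cover: {coverage_ratio:.1%}")
print(f"Uncovered shortfall: ${shortfall / 1e12:.1f} trillion")
# With these assumptions the insurer covers about 1% of the damage and is
# insolvent long before the rest is paid out, so the mechanism cannot
# actually internalize the cost of a global pandemic.
```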
In general, the paper only addresses the meta level of how to think about risks in general. What would have been required is to actually think about how high the risk is and to communicate that it’s serious enough that other people should pay attention. The paper could have cited Marc Lipsitch’s risk assessment in the introduction to frame the issue, but instead it talked about the issue in a more abstract way that doesn’t get the reader to think it is worth paying attention to.
It seems to falsely propagate the idea that the risk was very low by saying “However, in the case of potential pandemic pathogens, even a very low probability of accident could be unacceptable given the consequences of a global pandemic”, when the risk estimate that Marc Lipsitch made wasn’t of an order that anyone should consider low.
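To make the framing point concrete, here is a small illustrative expected-harm calculation. All numbers below are hypothetical placeholders chosen only to show the structure of the argument; they are not Lipsitch’s actual estimates:

```python
# Illustrative expected-harm calculation with hypothetical placeholder numbers
# (not Lipsitch's actual estimates).

p_pandemic_per_lab_year = 0.001   # assumed chance per lab-year that an accident causes a pandemic
lab_years = 10                    # assumed number of lab-years of such research
deaths_if_pandemic = 10_000_000   # assumed death toll of the resulting pandemic

p_at_least_one = 1 - (1 - p_pandemic_per_lab_year) ** lab_years
expected_deaths = p_at_least_one * deaths_if_pandemic

print(f"Probability of at least one pandemic: {p_at_least_one:.2%}")
print(f"Expected deaths: {expected_deaths:,.0f}")
# A per-lab-year probability that sounds 'very low' still implies an expected
# death toll on the order of a hundred thousand, which is not a risk anyone
# should frame as negligible.
```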
It seems like the paper was an opportunity to say something general about risk management, and FHI used it to express their general ideas of risk management while failing to actually look at the risk in question.
Just imagine someone saying about AI risk: “Even a very low chance of AI killing all humans is unacceptable. We should get AI researchers and AI companies to buy insurance against the harm created by AI risk.” The paper isn’t any different from that.