The first step would be to do the same things we do with other X-risks. In the case of OpenPhil, the topic should have been important enough for them to task a researcher with summarizing the state of the topic and what should be done. That’s the OpenPhil procedure for dealing with topics that matter.
That analysis might have resulted in the observation that this Marc Lipsitch guy seems to have a good grasp of the subject, and then in funding him with a million per year to do something.
It’s not clear that funding Lipsitch would have been enough, but it would be in line with “we tried to do something with our toolkit”.
With research it’s hard to know in advance what you will find if you invest in a bunch of smart people to think about a topic and how to deal with it.
In retrospect, finding out that the NIH illegally funneled money to Baric and Shi in circumvention of the moratorium imposed by the Office of Science and Technology Policy, and then challenging that publicly, might have prevented this pandemic. Being part of a scandal about an illegal transfer of funds would likely have seriously damaged Shi’s career, given the importance of being seen as respectable in China.
Finding that out at the time would have required reading a lot of papers to understand what was going on, but I think it’s quite plausible that a researcher who read through the top 200 gain-of-function research papers attentively and tried to build a good model of what was happening might have caught it.
Some relevant links:
https://www.openphilanthropy.org/sites/default/files/Lipsitch%201-29-14%20%28public%29.pdf
OpenPhil conversation notes with Lipsitch from 2014
https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/harvard-university-biosecurity-and-biosafety
Grant to Lipsitch in 2020 for ~$320k
https://www.openphilanthropy.org/giving/grants
All OpenPhil grants in the biosecurity area
I don’t think they prove anything, but they seem like useful references.
I do think they suggest the situation is better than I initially thought, given that funding Lipsitch / the Johns Hopkins Center for Health Security is a good idea.
I read through their report Research and Development to Decrease Biosecurity Risks from Viral Pathogens:
How could the problem eventually be solved or substantially alleviated? We believe that if a subset of the following abilities/resources were developed, the risk of a globally catastrophic pandemic would be substantially reduced:
A better selection of well-stocked, broad-spectrum antiviral compounds with low potential for development of resistance
Ability to confer immunity against a novel pathogen in fewer than 100 days
Widespread implementation of intrinsic biocontainment technologies that can reliably contain viral pathogens in the lab without impairing research
Improved countermeasures for non-viral conventional pathogens
Rapid, inexpensive, point-of-care diagnostics for all known pathogens
Inexpensive, ubiquitous metagenomic sequencing
Targeted countermeasures for the most dangerous viral pathogens
I do think that list is missing ways to reduce gain-of-function research; instead, it encourages gain-of-function research by funding “Targeted countermeasures for the most dangerous viral pathogens”.
Not talking about the tradeoffs between developing countermeasures against viruses and the risk caused by gain-of-function research seems to me a big omission. Not speaking about the dangers of gain-of-function research likely reduces conflict with virologists.
The report suggests to me that they let themselves be conned by researchers who claim that developing immunity against a novel pathogen in fewer than 100 days is about developing new vaccine platforms, when it is mostly about regulation and finding ways to verify drug safety in short amounts of time.
Fighting for changes in drug-regulation laws means getting into conflicts, while funding vaccine platforms is conflict-free.
Unsexy approaches, like reducing the number of surfaces touched by multiple people or researching better air filters and humidifiers to reduce transmission of all viruses, are also off the roadmap.