While I wasn’t at 80% on a lab leak when Eliezer asserted it a month ago, I’m now at 90%. It will take a while until this filters through society, but I feel like we can already look at what we ourselves got wrong.
In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI. At the time there was a public debate about gain-of-function research. One editorial described the risks of gain-of-function research as:
Insurers and risk analysts define risk as the product of probability times consequence. Data on the probability of a laboratory-associated infection in U.S. BSL3 labs using select agents show that 4 infections have been observed over <2,044 laboratory-years of observation, indicating at least a 0.2% chance of a laboratory-acquired infection (5) per BSL3 laboratory-year. An alternative data source is from the intramural BSL3 labs at the National Institutes of Allergy and Infectious Diseases (NIAID), which report in a slightly different way: 3 accidental infections in 634,500 person-hours of work between 1982 and 2003, or about 1 accidental infection for every 100 full-time person-years (2,000 h) of work (6).
A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
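To make the quoted arithmetic concrete, here is a minimal sketch that just reproduces the multiplication (the variable names are mine; the input figures are the ones quoted above):

```python
# Reproduces the back-of-the-envelope estimate from the quoted editorial.
# Figures are taken from the quote; variable names are illustrative.

p_infection_per_lab_year = 0.002          # 4 infections in <2,044 BSL3 lab-years (select agent data)
p_infection_per_worker_year = 0.01        # ~1 infection per 100 full-time person-years (NIAID data)
p_escape_low, p_escape_high = 0.05, 0.60  # chance an accidental infection spreads widely

print(f"Pandemic risk per lab-year:    "
      f"{p_infection_per_lab_year * p_escape_low:.2%} to {p_infection_per_lab_year * p_escape_high:.2%}")
print(f"Pandemic risk per worker-year: "
      f"{p_infection_per_worker_year * p_escape_low:.2%} to {p_infection_per_worker_year * p_escape_high:.2%}")
# -> roughly 0.01% to 0.12% per lab-year, and 0.05% to 0.60% per worker-year
```

The point of the sketch is just that even the most optimistic combination of the quoted inputs works out to roughly one pandemic in ten thousand lab-years.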
Even at the lower bound of 0.05% per full-time worker-year, it seems crazy that society continued playing Russian roulette. We could have seen the issue and protested. EAs could have created organizations to fight against gain-of-function research. Why didn’t we speak every Petrov Day about the necessity of stopping gain-of-function research? Organizations like OpenPhil should go through the Five Whys and model why they messed this up and didn’t fund the cause. What needs to change so that we as rationalists and EAs are able to organize against tractable risks that our society takes without good reason?
I feel like this just happened? There were a good number of articles written about this; see for example this article by the Global Priorities Project on GoF research:
http://globalprioritiesproject.org/wp-content/uploads/2016/03/GoFv9-3.pdf
I also remember a number of other articles by people working on biorisk, but would have to dig them up. But overall I had a sense there was both a bunch of funding and a bunch of advocacy and a bunch of research on this topic.
I searched LessWrong itself for “gain of function” and it didn’t bring up much. Searching for it on the OpenPhil site finds it mentioned a few times, so it seems that while OpenPhil came into contact with the topic, they failed to identify it as a cause area that needed funding.
All the hits on OpenPhil are from 2017 and earlier, and in 2018 the Trump administration ended the ban on gain-of-function research. That should have been a moment of public protest by our community.
Are you saying we should have been marching in the streets and putting up banners? Criticizing the end of the ban more in public? Or taking steps against it, somehow, using alternative mechanisms like advocacy with any contacts we might have in the world of virology?
The first step would be to do the same things we do with other X-risks. In the case of OpenPhil, the topic should have been important enough for them to task a researcher with summarizing the state of the topic and what should be done. That’s OpenPhil’s procedure for dealing with topics that matter.
That analysis might have resulted in the observation that this Marc Lipsitch guy seems to have a good grasp of the subject, and then in funding him with a million per year to do something.
It’s not clear that funding Lipsitch would have been enough, but it would at least have been in line with “we tried to do something with our toolkit”.
With research it’s hard to know in advance what you’ll find if you invest in a bunch of smart people thinking about a topic and how to deal with it.
In retrospect, finding out that the NIH illegally funneled money to Baric and Shi in circumvention of the moratorium imposed by the Office of Science and Technology Policy, and then challenging that publicly, might have prevented this pandemic. Being part of a scandal about an illegal transfer of funds would likely have seriously damaged Shi’s career, given the importance of being seen as respectable in China.
Finding that out at the time would have required reading a lot of papers to understand what was going on, but I think it’s quite plausible that a researcher who read through the top 200 gain-of-function research papers attentively and tried to get a good model of what was happening might have caught it.
Some relevant links:
https://www.openphilanthropy.org/sites/default/files/Lipsitch%201-29-14%20%28public%29.pdf
OpenPhil conversation notes from 2014 with Lipsitch
https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/harvard-university-biosecurity-and-biosafety
Grant to Lipsitch in 2020 for ~$320k
https://www.openphilanthropy.org/giving/grants
All OpenPhil grants in the biosecurity area
Don’t think they prove anything, but seem useful references.
I do think they suggest the situation is better than I initially thought, given that funding Lipsitch / the Johns Hopkins Center for Health Security is a good idea.
I read through their report Research and Development to Decrease Biosecurity Risks from Viral Pathogens:
I do think that list is missing ways to reduce gain-of-function research, and instead encourages it via funding for “Targeted countermeasures for the most dangerous viral pathogens”.
Not talking about the tradeoffs between developing measures against viruses and the risk caused by gain-of-function research seems to me a big omission. Not speaking about the dangers of gain-of-function research likely reduces conflicts with virologists.
The report suggests to me that they let themselves be conned by researchers who claim that developing immunity against a novel pathogen in fewer than 100 days is about developing new vaccination platforms, when it is mostly about regulation and finding ways of verifying drug safety in short amounts of time.
Fighting for changes in drug regulation law means getting into conflicts, while funding vaccine platforms is conflict-free.
Unsexy approaches like reducing the number of surfaces touched by multiple people, or researching better air filters/humidifiers to reduce transmission of all viruses, are also off the roadmap.
I have now read the paper, and given what we saw last year the market mechanism it proposes seems flawed. If an insurance company were responsible for paying out the damage created by the pandemic, that company would be insolvent and unable to pay. At the same time, the suppression of the lab leak hypothesis would have been even stronger if the existence of a billion-dollar company depended on people not believing it (with all the counterparty risk that comes with a major insurance company going bankrupt).
In general, the paper only addresses the meta level of how to think about risks. What was required was to actually think about how high the risk is and to communicate that it’s serious enough that other people should pay attention. The paper could have cited Marc Lipsitch’s risk assessment in the introduction to frame the issue, but instead talked about it in a more abstract way that doesn’t get the reader to think the issue is worth paying attention to.
It seems to falsely propagate the idea that the risk was very low by saying “However, in the case of potential pandemic pathogens, even a very low probability of accident could be unacceptable given the consequences of a global pandemic”, when the risk estimate that Marc Lipsitch made wasn’t of an order that anyone should consider low.
The paper reads as if there was an opportunity to say something general about risk management, and FHI used it to express their general ideas about risk management while failing to actually look at the risk in question.
Just imagine someone saying about AI risk, “Even a very low chance of AI killing all humans is unacceptable. We should get AI researchers and AI companies to buy insurance against the harm created by AI risk.” The paper isn’t any different from that.
Here is a data point not directly relevant to Less Wrong, but perhaps to the broader rationality community:
Around this time, Marc Lipsitch organized a website and an open letter warning publicly about the dangers of gain-of-function research. I was a doctoral student at HSPH at the time, and shared this information with a few rationalist-aligned organizations. I remember offering to introduce them to Prof. Lipsitch, so that maybe he could give a talk. I got the impression that the Future of Life Institute had some communication with him, and I see from their 2015 newsletter that there is some discussion of his work, but I am not sure if anything more concrete came out of this.
My impression was that while they considered this important, this was more of a catastrophic risk than an existential risk, and therefore outside their core mission.
While this crisis was a catastrophe and not an existential challenge, it’s unclear why that has to be the case in general.
The claim that global catastrophic risk isn’t part of the FLI mission seems strange to me. It’s the thing CEA’s Global Priorities Project focuses on (global catastrophic risk is mentioned more prominently by the Global Priorities Project than X-risk).
FLI does say on its website that one of its five focus areas is:
It seems to me like an analysis that treats cloning (and climate change) as an X-risk but not gain of function research is seriously flawed.
It does seem to me that they messed up in a major way and should do the Five Whys, just like OpenPhil should be required to.
Treating climate change as an X-risk but not gain-of-function research suggests too much trust in experts and doing what’s politically convenient instead of fighting the battles that are important. This was easy mode and they messed up.
Donors to both organizations should request an analysis of what went wrong.
Here is a video of Prof. Lipsitch at EA Global Boston in 2017. I haven’t watched it yet, but I would expect him to discuss gain-of-function research: https://forum.effectivealtruism.org/posts/oKwg3Zs5DPDFXvSKC/marc-lipsitch-preventing-catastrophic-risks-by-mitigating
He only addresses it indirectly by saying we shouldn’t develop very targeted approaches (which is what gain-of-function research is about) and should instead fund broader interventions. The talk doesn’t mention the specific risk of gain-of-function research.
I can’t speak for LessWrong as a whole, but I looked into this a little bit around that time and concluded that things actually looked like they were heading in a sensible direction. In particular, towards the end of 2014, the US government stopped funding gain-of-function research (https://www.nature.com/articles/514411a), and there seemed to be a growing consensus/understanding that it was dangerous. I think anyone doing (at least surface-level) research in 2014/early 2015 could have reasonably concluded that this wasn’t a neglected area. That does leave open the question of what I did wrong in not noticing that the moratorium was lifted three years later...
It seems that when a dangerous practice is stopped pending a safety review, it makes sense to schedule a future moment to check how the safety review turned out.
Maybe a way forward would be:
Whenever something done by a lot of scientists is categorically stopped pending a safety review, make a Metaculus question about how the safety review is likely to turn out.
That way when the safety review turns out negatively, it triggers an event that’s seen by a bunch of people who can then write a LessWrong post about it?
That leaves the question of whether there are any comparable moratoriums out there that we should look at more closely.
Eliezer seemed to think that the ban on funding for gain-of-function research in the US simply led to research grants going to labs outside the US (the Wuhan Institute of Virology in particular). He doesn’t really cite any sources here, so I can’t do much to fact-check his hypothesis.
Upon further googling, this gets murkier. Here’s a very good article that goes into depth about what the NIH did and didn’t fund at WIV and whether such research counts as “gain of function research”.
Some quotes from the article:
...
...
...
There are differing opinions on whether or not what the researchers at WIV did counts as gain of function research:
So to summarize: from what we know, researchers at WIV inserted a spike protein from a naturally occurring coronavirus into another coronavirus that was capable of replicating in a lab and infecting human cells. But the genome of this resulting virus seems too different from that of the pandemic-causing coronavirus for it to have been a direct ancestor.
Overall I don’t feel like enough people are linking their sources when they make statements like “I’d give the lab leak hypothesis a probability of X%”.
I think Eliezer ignores how important prestige is for the Chinese. We got them to outlaw human cloning by telling them that doing it would put the Chinese academic community in a bad light.
We likely could have done the same with gain-of-function research. For the Chinese, getting their first biosafety level 4 lab was likely mostly about prestige. Having no BSL-4 labs while a lot of other countries had them wasn’t acceptable to the Chinese because it suggested they weren’t advanced enough.
I do think it would be possible to make a deal that gives China the prestige it wants for its scientists without endangering everyone.
The Chinese took down their database of all the viruses they had in their possession on September 26, 2019. In their own words, they took it down because of a hacking attack during the pandemic (which suggests that, for them, the pandemic starts somewhere in September). If we had the database, we would likely find a more closely related virus in it. Given that the point of creating the database in the first place was to help us in a coronavirus pandemic, taking it down and not giving it to anyone is a clear sign that there’s something in it that would implicate them.
Basically, people outside of the virology community told them that they had to stop after 75 CDC scientists were exposed to anthrax and, a few weeks later, other scientists found a few vials of smallpox in a freezer.
The reaction of the virology community was to redefine what gain of function research happens to be and continue endangering everyone.
It’s like Wall Street people, when asked whether they do insider trading, saying: “According to our definition of what insider trading means, we didn’t.”
I have written all my sources up at https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis
Wow, this is quite the post! I’ve been looking for a post like this on LessWrong going over the lab leak hypothesis and the evidence for and against it, but I must have missed this one when you posted it.
I have to say, this looks pretty bad. I think I still have a major blindspot, which is that I’ve read much more about the details of the lab leak hypothesis than I have about the natural origin hypothesis, so I still don’t feel like I can judge the relative strength of the two. That being said, I think it is looking more and more likely that the virus was engineered during research and accidentally leaked from the lab.
Thanks for writing this up. I’m surprised more of this info doesn’t show up in other articles I’ve read on the origins of the pandemic.
I was too when I researched it. I think it tells us something about the amount of effort that went into narrative control.
Take for example Huang Yanling, who at the start of the pandemic was called “patient zero” until someone discovered that she worked at the Wuhan Institute of Virology and the Chinese started censoring information about her. The fact that the NIH asked the EcoHealth Alliance where Huang Yanling is suggests that the US government (which has the CIA/NSA, who wiretap a lot and hack people to try to get some idea of what’s going on) considers this an important piece of information.
Why doesn’t the name appear in the New York Times? Very odd...
It seems impossible for a simple he-said/she-said article about the questions from the NIH to EcoHealth to appear in any of the major publications.
After reading more, it seems that according to John Holdren (head of the Office of Science and Technology Policy) the Chinese came to US politicians to discuss how topics like gain of function research should be regulated:
China’s leaders aren’t completely irresponsible. They messed up in Wuhan by allowing the lab to run without enough trained personnel to operate it safely, but I would expect it was a combination of the goal of having the lab on the one hand, and on the other the information about the safety issues not reaching the right people, because the people responsible for the lab don’t want to look bad.
I doubt that Xi Jinping knew he had a biosafety level 4 lab without enough trained personnel to run it safely.
I think the fact that mistakes like this are so understandable is precisely why gain of function research is dangerous. One mistake can lead to a multi-year pandemic and kill 10 million people. With those stakes, I don’t think anyone should be doing gain of function research that could lead to human deaths if pathogens escaped.
I found the original website for Prof. Lipsitch’s “Cambridge Working Group” from 2014 at http://www.cambridgeworkinggroup.org/ . While the website does not focus exclusively on gain-of-function, this was certainly a recurring theme in his public talks about this.
The list of signatories (which I believe has not been updated since 2016) includes several members of our community (apologies to anyone who I have missed):
Toby Ord, Oxford University
Sean O hEigeartaigh, University of Oxford
Daniel Dewey, University of Oxford
Anders Sandberg, Oxford University
Anders Huitfeldt, Harvard T.H. Chan School of Public Health
Viktoriya Krakovna, Harvard University PhD student
Dr. Roman V. Yampolskiy, University of Louisville
David Manheim, 1DaySooner
Interestingly, there was an opposing group arguing in favor of this kind of research, at http://www.scientistsforscience.org/. I do not recognize a single name on their list of signatories.
That’s interesting. That leaves the question of why the FHI mostly stopped caring about it after 2016.
Past that point, https://www.fhi.ox.ac.uk/wp-content/uploads/Lewis_et_al-2019-Risk_Analysis.pdf and https://www.fhi.ox.ac.uk/wp-content/uploads/C-Nelson-Engineered-Pathogens.pdf seem to be about gain-of-function research while completely ignoring the issue of potential lab leaks, treating it only as an interesting biohazard topic.
My best guess is that it’s like in math, where applied researchers are lower status than theoretical researchers, and thus everyone wants to be seen as addressing the theoretical issues.
Infohazards are a great theoretical topic; discussing generalized methods to let researchers buy insurance for the side effects of their research is a great theoretical topic as well.
Given that Lipsitch didn’t talk directly about gain-of-function research at EA Global Boston in 2017 but instead spoke at a higher level about more generalized solutions, he might also have felt social pressure to discuss the issue in a theoretical manner rather than in an applied manner where he tells people directly about the risks of gain-of-function research.
If he had instead said on stage at EA Global Boston in 2017, “I believe that the risk from gain-of-function research is between 0.05% and 0.6% per full-time researcher-year,” this would have been awkward and created uncomfortable conflict. Talking about it in a more theoretical manner, on the other hand, allows a listener to just say, “Hey, Lipsitch seems like a really smart guy.”
I don’t mean that as a critique of Lipsitch, given that he actually did the best work. I do, however, think that EA Global having a social structure that gets people to act that way is a systemic flaw.
What do you think about that thesis?