Here is a data point not directly relevant to Less Wrong, but perhaps to the broader rationality community:
Around this time, Marc Lipsitch organized a website and an open letter warning publicly about the dangers of gain-of-function research. I was a doctoral student at HSPH at the time, and shared this information with a few rationalist-aligned organizations. I remember making an offer to introduce them to Prof. Lipsitch, so that maybe he could give a talk. I got the impression that the Future of Life Institute had some communication with him, and I see from their 2015 newsletter that there is some discussion of his work, but I am not sure if anything more concrete came out of this.
My impression was that while they considered this important, they saw it as more of a catastrophic risk than an existential risk, and therefore outside their core mission.
While this crisis was a catastrophe and not an existential threat, it's unclear why that has to be the case in general.
The claim that global catastrophic risk isn't part of the FLI mission seems strange to me. It's the thing the Global Priorities Project of CEA focuses on (the Global Priorities Project mentions global catastrophic risk more prominently than X-risk).
FLI does say on its website that one of its five focus areas is:
Biotechnology and genetics often inspire as much fear as excitement, as people worry about the possibly negative effects of cloning, gene splicing, gene drives, and a host of other genetics-related advancements. While biotechnology provides incredible opportunity to save and improve lives, it also increases existential risks associated with manufactured pandemics and loss of genetic diversity.
It seems to me that an analysis that treats cloning (and climate change) as X-risks but not gain-of-function research is seriously flawed.
It does seem to me that they messed up in a major way and should do the 5 Whys, just as OpenPhil should be required to do.
Treating climate change as an X-risk but not gain-of-function research suggests too much trust in experts and doing what's politically convenient instead of fighting the battles that are important. This was easy mode, and they messed up.
Donors to both organizations should request an analysis of what went wrong.
Here is a video of Prof. Lipsitch at EA Global Boston in 2017. I haven’t watched it yet, but I would expect him to discuss gain-of-function research: https://forum.effectivealtruism.org/posts/oKwg3Zs5DPDFXvSKC/marc-lipsitch-preventing-catastrophic-risks-by-mitigating
He only addresses it indirectly, by saying we shouldn't develop very targeted approaches (which is what gain-of-function research is about) and should instead fund broader interventions. The talk doesn't mention the specific risk of gain-of-function research.