(This isn’t an attempt to answer the question, but…) My best guess is that info hazard concerns reduced the amount of discourse on GoF research to some extent.
Can you be more specific? My vague impression is that if GoF research is already happening, talking about GoF research isn’t likely to be an info hazard because the info is already in the heads of the people in whose heads it’s hazardous.
The debate about gain of function research started as a debate about infohazards when Fouchier and Kawaoka modified H5N1 in 2011 and published the modified sequence.
It’s possible that gain of function research is therefore mentally associated as being an infohazard. The more recent FHI papers for example mention gain of function research only in relation to infohazards and not the problem of lab leaks in labs doing gain of function research.
The OpenPhil analysis, which refers to gain of function research as dual use research, likewise uses a frame that suggests the problem is possible military use, or someone stealing engineered viruses and intentionally spreading them.
This seems to reflect the general human bias that we have an easier time imagining other humans intentionally creating harm than accidentally creating harm. It’s quite similar to naive people thinking that the problem of AGI is humans using AGIs for nefarious ends.
(I’m not sure to what extent you’re trying to “give background info” versus “be more specific about how people thought of GoF research as an infohazard” versus “be more specific about how GoF research actually was an infohazard” versus other things, so I might be talking past you a bit here.)
The debate about gain of function research started as a debate about infohazards when Fouchier and Kawaoka modified H5N1 in 2011 and published the modified sequence.
So this seems to me likely to be an infohazard that was found through GoF research, but not obviously GoF-research-as-infohazard. That is, even if we grant that the modified sequence was an infohazard and a mistake to publish, it doesn’t then follow that it’s a mistake to talk about GoF research in general. Because when GoF research is already happening, it’s already known within certain circles, and those circles disproportionately contain the people we’d want to keep the knowledge from. It might be the case that talking about GoF research is a mistake, but it’s not obviously so.
What I’m trying to get at is that “info hazard concerns” is pretty vague and not very helpful. What were people concerned about, specifically, and was it a reasonable thing to be concerned about? (It’s entirely possible that people made the mental leap from “this thing found through GoF is an infohazard” to “GoF is an infohazard”, but if so it seems important to realize that that’s a leap.)
a frame that suggests that possible military use or someone stealing engineered viruses and intentionally spreading them is what the problem is about.
Here, too: if this is what we’re worried about, it’s not clear that “not talking about GoF research” helps the problem at all.
Now (after all the COVID-19 related discourse in the media), it indeed seems a lot less risky to mention GoF research. (You could have made the point that “GoF research is already happening” prior to COVID-19; but perhaps a very small fraction of people then were aware that GoF research was a thing, making it riskier to mention).
I agree that probably only a small fraction of people were aware that GoF research was a thing until recently. I would assume that fraction included most of the people who were capable of acting on the knowledge. (That is, the question isn’t “what fraction of people know about GoF research” but “what fraction of people who are plausibly capable of causing GoF research to happen know about it”.) But maybe that depends on the specific way you think it’s hazardous.