My best guess is that it’s like in math, where applied researchers have lower status than theoretical researchers, and thus everyone wants to be seen as addressing the theoretical issues.
Infohazards are a great theoretical topic; discussing generalized methods that let researchers buy insurance for the side effects of their research is a great theoretical topic as well.
Given that Lipsitch didn’t talk directly about gain-of-function research at EA Global Boston in 2017 but instead tried to speak at a higher level about more generalized solutions, he might also have felt social pressure to treat the issue in a more theoretical manner rather than in a more applied manner, where he would have told people directly about the risks of gain-of-function research.
If he had instead said on stage at EA Global Boston in 2017, “I believe that the risk of gain-of-function research is between 0.05% and 0.6% per full-time researcher,” that would have been awkward and would have created uncomfortable conflict. Talking about it in a more theoretical manner, on the other hand, allows a listener to just think “Hey, Lipsitch seems like a really smart guy.”
I don’t mean that as a critique of Lipsitch, given that he actually did the best work on the issue. I do, however, think that EA Global having a social structure that pushes people to act that way is a systematic flaw.
That’s interesting. That leaves the question of why the FHI mostly stopped caring about it after 2016.
Past that point, https://www.fhi.ox.ac.uk/wp-content/uploads/Lewis_et_al-2019-Risk_Analysis.pdf and https://www.fhi.ox.ac.uk/wp-content/uploads/C-Nelson-Engineered-Pathogens.pdf seem to be about gain-of-function research while completely ignoring the issue of potential lab leaks, treating it only as an interesting biohazard topic.
What do you think about that thesis?