I have now read the paper, and given what we saw last year, the market mechanism it proposes seems flawed. If an insurance company had been responsible for paying out the damage caused by the pandemic, that company would have been insolvent and unable to pay, with all the counterparty risk that comes with a major insurer going bankrupt. At the same time, the suppression of the lab leak hypothesis would have been even stronger, since the existence of a billion-dollar company would have depended on people not believing in it.
In general, the paper only addresses the meta level of how to think about risks. What would have been required is to actually assess how high the risk is and to communicate that it’s serious enough that other people should pay attention. The paper could have cited Marc Lipsitch’s risk assessment in the introduction to frame the issue, but instead it talks about it in a more abstract way that doesn’t lead the reader to think the issue is worth paying attention to.
It seems to falsely propagate the idea that the risk was very low by saying “However, in the case of potential pandemic pathogens, even a very low probability of accident could be unacceptable given the consequences of a global pandemic”, when the risk estimate Marc Lipsitch made wasn’t of an order that anyone should consider low.
It reads as if there was an opportunity to say something general about risk management, and FHI used it to express their general ideas on the subject while failing to actually look at the risk in question.
Just imagine someone saying about AI risk: “Even a very low chance of AI killing all humans is unacceptable. We should get AI researchers and AI companies to buy insurance against the harm created by AI risk.” The paper isn’t any different from that.