If you’re interested in the first principles/psychological dynamics that this phenomenon is downstream of: one is social status, well described in Simler’s Social Status: Down the Rabbit Hole; another is the tendency toward excessive self-importance (and the equilibria where we excessively anticipate that tendency in others), described in Carnegie and Gull’s post Win friends and influence people: the bombshell.
My model of the US Natsec community’s secret activities indicates that they would engage in ambitious suppression of information about the lab leak hypothesis regardless of whether it was true, and regardless of their own assessment of the odds that it was true or false. This is because:
1) Such information, if public, would damage US-China relations whether or not it was true. Failing to suppress it incentivizes information operations by third parties who benefit from destabilizing US-China diplomacy, potentially including Russia, North Korea, or war hawks within the Natsec community itself who profit from worsening US-China relations.
2) Compartmentalization and funding turf wars within the Natsec community can cause information asymmetry and lack of trust between agencies and cliques, potentially including distrust between compartments within the same agency. This means different powerful people could hold all kinds of assumptions about the odds that the lab leak hypothesis is true. It only takes one person assuming the worst to engage in risky information operations to “mitigate the harm”, including utilizing personal connections to a wide variety of public-facing organizations (described in chapter 4 of Joseph Nye’s Soft Power, although those systems are largely obsolete due to the emergence of ML-based influence systems).
3) There is also a strong human tendency to be optimistic, not pessimistic, about one’s ability to get away with ambitious (and expensive) operations.
Strong evidence against this theory and its predictions is the actual public statements by intel orgs, which were notably less skeptical of lab origin than other sources were.