I thought the discussion of how different fields face different tradeoffs around disclosing vulnerabilities was particularly interesting:
The framework helps to explain why the disclosure of software vulnerabilities will often be beneficial for security. Patches to software are often easy to create, frequently within a matter of weeks, and they fully resolve the vulnerability. The patch can also be easily propagated: downloaded software is often updated automatically over the internet, and for websites the fix can take effect immediately. In addition, counterfactual possession is likely, because it is normally easier to find a software vulnerability (of which there is a constant supply) than to make a scientific discovery (see [3]). These factors combine to make a reasonable argument in favour of public disclosure of software vulnerabilities, at least after the vendor has been given time to prepare a patch.
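As a toy illustration (my own sketch, not the paper's model; the factor names, the all-or-nothing rule, and the example classifications are assumptions), the disclosure factors in this paragraph could be encoded as a simple checklist:

```python
# Hypothetical encoding of the disclosure factors listed above.
# Factor names, the all() rule, and the examples are illustrative only.
from dataclasses import dataclass

@dataclass
class VulnClass:
    name: str
    patch_is_tractable: bool         # can an effective fix be built quickly?
    patch_is_complete: bool          # does the fix fully resolve the issue?
    patch_propagates_easily: bool    # auto-updates, server-side fixes, etc.
    counterfactual_possession: bool  # would attackers likely find it anyway?

def disclosure_looks_defence_dominant(v: VulnClass) -> bool:
    """Crude heuristic: disclosure looks beneficial when every factor
    the paragraph lists points in its favour."""
    return all([v.patch_is_tractable, v.patch_is_complete,
                v.patch_propagates_easily, v.counterfactual_possession])

web_bug = VulnClass("web app vulnerability", True, True, True, True)
locks = VulnClass("master-keyed locks", False, True, False, False)
print(disclosure_looks_defence_dominant(web_bug))  # True
print(disclosure_looks_defence_dominant(locks))    # False
```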
Contrasting other fields will further bring into relief the comparatively defence-dominant character of software vulnerability knowledge. We can focus on the tractability of defensive solutions: for certain technologies, there is no low-cost, straightforward, effective defence.
First, consider biological research that provides insight into the manufacture of pathogens, such as a novel virus. A subset of viruses are very difficult to vaccinate against (there is still no vaccine for HIV) or otherwise prepare for. This lowers the defensive benefit of publication by blocking a main causal pathway through which publication leads to greater protection. This contrasts with the case where an effective treatment can be developed within a reasonable time period, which could weigh in favour of publication [15].
Second, consider hardware-based vulnerabilities, such as those in kinetic attacks or physical key security. Advances in drone hardware have enabled the disruption of airports and attacks on infrastructure such as oil facilities; these attacks presently lack a cheap, effective solution [18]. This arises in part from the large attack surface of physical infrastructure: the drone’s destination can be one of many possible points on the facility, and it can arrive there via a multitude of different trajectories. This means that the path of the drone cannot simply be blocked.
Moreover, in 2003 a researcher published details about a vulnerability in physical key systems [2]. Apartment buildings, offices, hotels and other large buildings often use systems in which a single master-key can open all doors. The research showed how to derive the master-key from a single non-master key. The researcher wrote that there was “no simple or completely effective countermeasure that prevents exploitation of this vulnerability short of replacing a master keyed system with a non-mastered one” ([1]; see [2] for further discussion of countermeasures). Replacing master-key systems is a costly solution insofar as such systems are useful, and the change is very difficult to propagate: physical key systems distributed across the world would need to be manually updated.
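To make the derivation concrete, here is a minimal simulation of the rights-amplification attack described in [2], under simplified assumptions of mine (the pin count, number of depths, and oracle-style lock model below are illustrative, not taken from the paper):

```python
# Simulated master-key recovery from one change key plus access to one lock.
# A master-keyed pin-tumbler lock opens if, at every pin position, the key's
# cut depth matches either the change-key depth or the master-key depth.
import random

POSITIONS, DEPTHS = 5, 8  # assumed lock parameters for illustration

master = [random.randrange(DEPTHS) for _ in range(POSITIONS)]
change = [random.choice([d for d in range(DEPTHS) if d != m]) for m in master]

def lock_opens(key):
    return all(k in (c, m) for k, c, m in zip(key, change, master))

def recover_master(change_key):
    recovered, probes = [], 0
    for i in range(POSITIONS):
        found = change_key[i]  # if no other depth opens, master shares this cut
        for d in range(DEPTHS):
            if d == change_key[i]:
                continue
            test = list(change_key)
            test[i] = d            # vary exactly one pin position at a time
            probes += 1
            if lock_opens(test):
                found = d          # this depth must be the master cut here
                break
        recovered.append(found)
    return recovered, probes

found, probes = recover_master(change)
assert found == master
print(f"recovered master key with {probes} probe keys "
      f"(vs {DEPTHS ** POSITIONS} brute-force candidates)")
```

The point of the sketch is the cost structure: the attacker needs at most positions × (depths − 1) probe keys, linear rather than exponential in the size of the key space, which is why a single change key and access to one lock suffice.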
Finally, consider the policy question of whether one should have published nuclear engineering research, such as on uranium enrichment, in the 1960s. For countries like India and Pakistan, this would have increased, not decreased, their potential to destroy each other’s cities, owing to the lack of defensive solutions: as with certain diseases, nuclear bombs cannot be adequately protected against. Moreover, the minor protections against nuclear bombs that do exist can be pursued without intricate knowledge of how nuclear bombs are manufactured: there is low transferability of offensive into defensive knowledge. For example, a blueprint for the design of a centrifuge does not help one build a better defensive bunker. Overall, if both a potential defender and a potential attacker are given knowledge that helps them build nuclear weapons, that knowledge is more useful for mounting an attack than for protecting against one: the knowledge is offence-biased.
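The final claim can be put slightly more formally (my notation; the paper does not give a formula):

```latex
\Delta_A(K) = \text{marginal attack capability gained from knowledge } K, \qquad
\Delta_D(K) = \text{marginal defensive capability gained from } K;

K \text{ is offence-biased} \iff \Delta_A(K) > \Delta_D(K).
```

On this reading, low transferability of offensive into defensive knowledge means \(\Delta_D(K) \approx 0\) for artefacts like centrifuge blueprints, so publishing them is strongly offence-biased.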
Differences across fields will shape the security value of publication, which can influence disclosure norms among security-minded scientists and policymakers. The Manhattan Project was more secretive than locksmiths and influenza researchers, who are in turn often more secretive than those finding vulnerabilities in software. Indeed, there was a culture clash between the researcher who published the flaw in the master-key system, above, who came from a computer security background, and the locksmiths who accused him of being irresponsible. These disclosure cultures exist in the form of default practices, but also in common refrains: for example, language about the virtues of “studying” a problem, or the value of users being empowered by disclosure to “make decisions for themselves”. Such language embeds implicit answers to the framework given in this section, and therefore caution should be exercised when importing concepts and language from other fields.