In the field of security engineering, a persistent flat-earth belief is ‘security by obscurity’: the doctrine that security measures should not be disclosed or even discussed.
In the seventeenth century, when Bishop Wilkins wrote the first book on cryptography in English in 1641, he felt the need to justify himself: “If all those useful Inventions that are liable to abuse, should therefore be concealed, there is not any Art or Science which might be lawfully profest”. In the nineteenth century, locksmiths objected to the publication of books on their craft; although villains already knew which locks were easy to pick, the locksmiths’ customers mostly didn’t. In the 1970s, the NSA tried to block academic research in cryptography; in the 1990s, big software firms tried to claim that proprietary software is more secure than its open-source competitors.
Yet we actually have some hard science on this. In the standard reliability growth model, it is a theorem that opening up a system helps attackers and defenders equally; it is an empirical question whether the assumptions of this model apply to a given system, and if they don't, there is a further empirical question of whether open or closed is better.
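A hedged sketch of why that symmetry holds, assuming the standard model in which the residual failure rate falls off inversely with cumulative testing effort (the symbols $h$, $k$, and $\lambda$ are my notation, not the source's): after cumulative testing effort $t$, the rate at which new security bugs turn up is roughly

\[ h(t) \approx \frac{k}{t}. \]

Opening the system makes each unit of effort more effective by some factor $\lambda \ge 1$, for attackers and defenders alike, so the open system after effort $t$ behaves like the closed one after effort $\lambda t$:

\[ h_{\text{open}}(t) = \frac{k}{\lambda t}. \]

The attacker's expected bug yield per unit of effort is scaled up by $\lambda$, but so is the defenders' rate of finding and fixing the same bugs; the ratio of attack to defence effectiveness is unchanged, which is the sense in which openness helps both sides equally.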
Indeed, in systems software the evidence supports the view that open is better. Yet the security-industrial complex continues to use the obscurity argument to prevent scrutiny of the systems it sells. Governments are even worse: many of them would still prefer that risk management be a matter of doctrine rather than of science.
There are cases where data or ideas can be really hazardous. I don’t count “but it might hurt somebody’s precious feelings” as one of those cases.
I just came across this:
This seems to be neither here nor there as regards the present debate.
I assign some probability to security by obscurity working for bio, and some to it not working.