The Perils of the Security Mindset Taken Too Far
Epistemic status: A few initial thoughts.
For your project to be secure, no one should know of your project’s existence. Better still, your project should show all the outward signs of being one project while actually being another.
To be secure, your inner workings should be opaque, so that you become less predictable. As a result, people trust you less. In Newcomb’s problem, one of the common strategies people come up with to trick Omega is to use quantum sources of randomness to become less predictable. The common counter is that Omega only fills the opaque box when it can predict you, so unpredictability gets treated like two-boxing.
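To make that counter concrete, here is a toy expected-value calculation (my own illustration, not from the post), under the stipulation that Omega leaves the opaque box empty whenever it cannot predict you; the $1,000/$1,000,000 payoffs are the standard ones from the thought experiment:

```python
# Toy payoffs from the standard thought experiment.
TRANSPARENT = 1_000      # always in the transparent box
OPAQUE = 1_000_000       # in the opaque box only if Omega predicts one-boxing

def expected_payoff(p_one_box: float, predictable: bool) -> float:
    """Expected winnings for an agent who one-boxes with probability p_one_box."""
    if predictable:
        # Omega foresees the deterministic choice and fills accordingly.
        return OPAQUE if p_one_box == 1.0 else TRANSPARENT
    # Stipulation: unpredictable (e.g. quantum-randomising) agents get an
    # empty opaque box, so two-boxing is the only way they win anything.
    return (1 - p_one_box) * TRANSPARENT

print(expected_payoff(1.0, predictable=True))   # committed one-boxer: 1000000
print(expected_payoff(0.5, predictable=False))  # quantum randomiser:  500.0
```

Under that stipulation, a quantum randomiser can never beat the transparent box’s $1,000, while a predictably committed one-boxer gets the million.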
If you are opaque, it is not known who you associate with. Even if people trust you, they might not trust that you would not associate with people they do not trust.
If you are opaque, your security model is not known. Even if people trust you not to leak things on purpose, they cannot be confident you will not leak them by accident.
You trust people less than is optimal: your decisions about whom to trust produce false negatives.
You degrade the rationality of other people. At least two things keep people from being as rational as they could be: a lack of brainpower and a lack of information. Hiding information means people can be a lot less rational or effective. This cost is borne by everyone but you, so you might not be accounting for it.
Hiding your strategic views hides your flaws: no one can tell whether you are being too paranoid, because you hide your threat model.
Brainpower is hoovered up modelling other people modelling you, to make sure you don’t tip your hand.
If you can possibly avoid taking the security mindset this far, do so. Do all you can in the freedom of openness. Secrecy can also be a mindkiller.
Secrecy != security. You’re far more secure by being transparent, open, and immune from attack. The best way for your project to be secure is to have a solid business idea and excellent implementors, with no secrecy at all—tell everyone and recruit the best to your side.
The best way to “beat” Omega is to be so wealthy that you only care about the box contents for the game’s amusement potential.
I think this post is based on a misunderstanding of “security mindset”. When I’ve heard it used, it usually assumes a lack of secrecy and looks for provable defenses against large classes of attack.
You always need a modicum of secrecy to be secure (private keys, passwords, etc.). Secrecy can also help security a lot. For example, whoever Satoshi Nakamoto is helped their security a lot by using a pseudonym and covering their traces pretty well (if they are an individual), so they don’t have to worry about being kidnapped and forced to hand over their bitcoin.
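That division of labour is just Kerckhoffs’s principle: keep the design public and concentrate all the secrecy in a small key. A minimal sketch using only the Python standard library (purely illustrative, not anything from the thread):

```python
import hmac
import hashlib
import secrets

# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256) is
# completely public; the only secret is the key.
key = secrets.token_bytes(32)  # the "modicum of secrecy"

def sign(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information via timing side channels
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"release v1.0")
assert verify(b"release v1.0", tag)
assert not verify(b"release v1.1", tag)
```

Everything above can be published without weakening the scheme; only `key` needs protecting.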
Security teams often reach for secrecy when they want attempted attacks to stay visible (because you can’t protect against zero-days). For example, you might not want the rules of your web application firewall to be known, so that people can’t set up duplicate infrastructure and quietly probe it for holes.
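As a toy illustration of why (the rule set here is hypothetical and deliberately tiny), a WAF is roughly a list of patterns, and anyone who can read the list can tune payloads offline against a replica and only ever send requests that pass:

```python
import re

# Hypothetical, deliberately simplistic WAF rules. If these are public, an
# attacker can test payloads against a local copy and send only the ones
# that slip through -- the defender never sees the failed attempts.
RULES = [
    re.compile(r"(?i)\bunion\b.*\bselect\b"),  # crude SQL-injection signature
    re.compile(r"(?i)<script\b"),              # crude XSS signature
    re.compile(r"\.\./"),                      # path traversal
]

def blocked(request_body: str) -> bool:
    return any(rule.search(request_body) for rule in RULES)

print(blocked("id=1 UNION SELECT password FROM users"))  # True
print(blocked("id=1"))                                   # False
```

Keeping the rules private doesn’t make the application secure by itself; it just forces probing to happen against the live system, where failed attempts can be logged and noticed.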
Secrecy gets harder to manage once you start worrying about securing yourself from insider threats, and so on.
The security mindset does not stop at the computer. If you want to really use it, you cannot stop there, for adversaries and attackers do not. You have to look at personnel and their backgrounds: can you trust Y with X information (be it encryption keys or source code), for whatever you are trying to do?
How do you know that Satoshi Nakamoto is secure? For all we know there’s a good chance that he’s dead.
It’s not that easy for the NSA to make a famous hacker disappear, but on the other hand there’s no pushback when they kidnap someone anonymous like Satoshi Nakamoto.
To me the post feels very speculative and far removed from the practical concerns that come with the secrecy mindset.
Planning to run a big project without anyone knowing is a bad plan. There’s a reason why open-source code is valued in the security community.
Like I said, most of the examples were of taking the mindset too far. Sometimes it is appropriate to go as far as I described; it depends on the stakes and the actors at play. For example, to protect the secret that Enigma had been broken, the Allies went as far as arranging for reconnaissance planes to be spotted near convoys that had actually been located through decrypts.
On your second point: this post wasn’t an argument that a random person should try to run projects secretly, just that if you were trying to hide the progress or contents of your project from highly motivated state-level actors, only partially hiding your project seems dumb.
That sounds to me like you don’t know anyone with a security mindset who has an interest in hiding certain parts of projects from state-level actors, and that this is a completely theoretical exercise for you.