If you add ad hoc patches until you can't imagine any way for it to go wrong, you get a system that is too complex to imagine. This is the "I can't figure out how this fails" scenario. It is going to fail for reasons that you didn't imagine.
If you understand why it can't fail, for deep fundamental reasons, then it's likely to work.
This is the difference between the security mindset and ordinary paranoia. The difference between adding complications until you can't figure out how to break the code, and proving that breaking the code is impossible (assuming the adversary can't get your one-time pad, it's only used once, your randomness is really random, your adversary doesn't have anthropic superpowers etc.).
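To make those assumptions concrete, here is a minimal one-time-pad sketch in Python. This is an illustration, not production crypto: the information-theoretic secrecy proof holds only if the key is truly random, at least as long as the message, kept secret, and never reused, which is exactly the list of assumptions above.

```python
import secrets

def otp_xor(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with a key byte.
    # Secrecy is provable *given* the assumptions; it is not
    # a property of the XOR itself.
    assert len(key) >= len(message), "key must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, key))

# XOR is its own inverse, so the same function decrypts.
key = secrets.token_bytes(16)   # assumption: really random
msg = b"attack at dawn!!"
ciphertext = otp_xor(msg, key)  # assumption: key used only this once
assert otp_xor(ciphertext, key) == msg
```

Each assert and comment maps to one assumption; violate any of them (e.g. reuse the key for two messages) and the proof, and the security, evaporates.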
I would think that the chance of serious failure in the first scenario was >99%, and in the second (assuming you're doing it well, and the assumptions you rely on are things you have good reason to believe) <1%.