Security Mindset is basically: you can think like a hacker, about exploits, how to abuse the rules, etc., so you can defend against hacks & exploits. You don’t stop at a basic “looks safe to me!”
Mm, that’s not exactly how I’d summarize it. That seems more like ordinary paranoia:
Lots of programmers have the ability to imagine adversaries trying to threaten them. They imagine how likely it is that the adversaries are able to attack them a particular way, and then they try to block off the adversaries from threatening that way. Imagining attacks, including weird or clever attacks, and parrying them with measures you imagine will stop the attack; that is ordinary paranoia.
My understanding is that Security Mindset-style thinking doesn’t actually rest on your ability to invent a workable plan of attack. Instead, it’s more like imagining that there exists a method for unstoppably breaking some (randomly-chosen) element of your security, and then figuring out how to make your system secure despite that. Or… that it’s something like the opposite of fence-post security, where you’re trying to make sure that for your system to be broken, several conditionally independent things need to go wrong or be wrong.
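To make that last contrast concrete, here’s a rough sketch (hypothetical check names, illustration only) of the “several conditionally independent things have to go wrong” idea: access is granted only when every layer passes, and a layer that fails or throws denies by default, so an attacker who has unstoppably broken any one layer still hasn’t broken the system.

```python
import hmac
import time

# Hypothetical shared secret; illustration only.
SECRET = b"example-secret"

def signature_valid(request: dict) -> bool:
    # Layer 1: the request body must carry a valid HMAC-SHA256 signature.
    expected = hmac.new(SECRET, request["body"], "sha256").hexdigest()
    return hmac.compare_digest(expected, request.get("signature", ""))

def token_unexpired(request: dict) -> bool:
    # Layer 2: the auth token must not be past its expiry timestamp.
    return request.get("token_expiry", 0) > time.time()

def under_quota(request: dict) -> bool:
    # Layer 3: the caller must be under a per-key request quota.
    return request.get("requests_used", 0) < request.get("quota", 100)

def allow(request: dict) -> bool:
    # Fail closed: a failed or crashing check denies access, so breaking
    # the system requires all of these (roughly independent) layers to be
    # wrong at once, not just one fence post.
    for check in (signature_valid, token_unexpired, under_quota):
        try:
            if not check(request):
                return False
        except Exception:
            return False
    return True
```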
Ok, thanks for the correction! My definition was wrong, but the argument still stands that it should be teachable, or at least testable.