Defending the security mindset here! Without having even read the rest of the text yet...
It’s not (necessarily) about worst-case thinking. It’s more about making sure that you assign realistic probabilities to cases, taking into account the knowledge that an intelligent agent may turn up and intentionally mess with things to make otherwise low probabilities higher.
One application of that is that you have to be exhaustive in enumerating all the possible cases, and can’t neglect any just because they seem weird or haven’t happened in the past. That does indeed often turn into identifying non-obvious cases that may be worst or approximately worst. But you are not required to assume the worst case in security, only to compensate for distortions in your estimate of how likely that case actually is.
On edit: The next phase, of course, is to compensate for your likely failure to have identified all the cases to begin with...