And what are the most compelling threat models that don’t have good public write-ups?
It depends on how you slice and dice the space (and on what counts as a “threat model”); I don’t have a good answer for this. In general I feel like a threat model is more like something that everyone can make for themselves, a model of the whole space of threats, rather than a short list of things that you might discover.
We could talk about particular threats that don’t have good public write-ups. I feel like there are various humans-are-fragile-so-a-weak-AI-takes-over-when-the-world-falls-apart possibilities, and those haven’t been written up very well.
I think Neel is using the term in the sense I use it: you carve up the space of threats in some way, and then a “threat model” is one of the pieces that you carved out, rather than the way in which you carved it up.
This is meant to be similar to how, in security, there are many possible kinds of risks you might be worried about, but you choose a particular set of capabilities that an attacker could have and call that a “threat model.” It probably doesn’t capture every setting you care about, but it does capture one particular piece of the space.
(Though maybe in security the hope is to choose a threat model that actually contains all the threats you expect in reality, so perhaps this analogy isn’t the best.)
(I think “that’s a thing that people make for themselves” is also a reasonable response for this meaning of “threat model”.)
From that perspective, I guess by default I’d think of a threat as something like “this particular team of hackers with this particular motive,” and a threat model as something like “maybe they have one or two zero-days, their goal is DoS or exfiltrating information, and they may have an internal collaborator, but not one with admin privileges...” And then the number of possible threat models is vast even compared to the vast space of threats.
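The threat vs. threat-model distinction above can be sketched as data: a threat is a concrete actor, while a threat model is a bundle of assumed attacker capabilities that many possible attacks either fall inside or outside of. Here is a minimal Python sketch of that framing; all the class and field names are illustrative inventions, not from any real security framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Threat:
    """A concrete adversary: a particular team with a particular motive."""
    actor: str
    motive: str

@dataclass(frozen=True)
class ThreatModel:
    """Assumed attacker capabilities: one carved-out piece of threat space."""
    max_zero_days: int
    goals: frozenset        # e.g. {"DoS", "exfiltration"}
    insider: bool           # may have an internal collaborator...
    insider_admin: bool     # ...with admin privileges

    def covers(self, zero_days: int, goal: str, insider_is_admin: bool) -> bool:
        """Does this model account for an attack with these properties?"""
        return (
            zero_days <= self.max_zero_days
            and goal in self.goals
            and (not insider_is_admin or self.insider_admin)
        )

# The model described in the comment: one or two zero-days, DoS or
# exfiltration, possibly an insider but not one with admin privileges.
model = ThreatModel(max_zero_days=2,
                    goals=frozenset({"DoS", "exfiltration"}),
                    insider=True, insider_admin=False)

print(model.covers(zero_days=1, goal="DoS", insider_is_admin=False))  # True
print(model.covers(zero_days=1, goal="DoS", insider_is_admin=True))   # False
```

The point of the sketch is that the space of threat models is combinatorially larger than the space of threats: each model is a region of capability-space (every assignment of bounds and flags is a distinct model), whereas a threat is a single point.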