You mean an effort that’s in the top 3 priorities for the entire Chinese state? Like up there at the same level as “maintain the survival of the state”, and above stuff like “get Taiwan back” or “avoid unrest over the economy”?
You’re not going to get 80 percent two-year protection against that, period. The measures that RAND describes wouldn’t do it, and I don’t think any other measures would either (short of just not creating the target).
I doubt those measures would work even if it were just a top 3 priority for only the spies. In fact, they left out a whole bunch of standard “spy” techniques.
Also, realistically, those measures would never be adopted by anybody. The isolation requirement alone would make them a non-starter, and so would the supply chain requirements. Notice that I’m not saying those things aren’t needed. I’m saying you won’t get them. Before I escaped the formal security world, I had a lot of bitter experience saying “If you want the level of assurance you claim to want, the only way to get it is X”, and being told “we’re not doing X”. Usually followed by them implementing some deeply inadequate substitute and pretending it was equivalent[1].
By the way, I asked “which SL5”, because n-level models are a dime a dozen, and “SL” is short and generic and likely to get used a lot. I was guessing you meant some extension to ISA/IEC 62443 (which goes up to SL4, but I could imagine people working on an SL5), or maybe the Lloyd’s maturity model. I’m pretty sure I’ve seen other documents that had “SLn” structures, although I can’t name them offhand. Or maybe it was “CL”, or “ML”, or “AL”, or all of the above. People from the more generic security world are going to get confused if you just talk about “SL5” without saying where it comes from.
… and when I start seeing stuff like “Strict limitation of external connections to the completely isolated network”, I tend to think I’m seeing the beginnings of that process…
By “top 3 priority”, I mean “among the top 3 most prioritized cyber attacks of that year”. More precisely, I’m discussing robustness against OC5 as defined in the RAND report linked above:
OC5 Top-priority operations by the top cyber-capable institutions
Operations roughly less capable than or comparable to 1,000 individuals who have experience and expertise years ahead of the (public) state of the art in a variety of relevant professions (cybersecurity, human intelligence gathering, physical operations, etc.) spending years with a total budget of up to $1 billion on the specific operation, with state-level infrastructure and access developed over decades and access to state resources such as legal cover, interception of communication infrastructure, and more.
This includes the handful of operations most prioritized by the world’s most capable nation-states.
Emphasis mine.
OK, sorry. That’s slightly below “top 3 priorities for the spies”, I think, but I still don’t think it’s reasonable to expect to protect a file that’s in use against it for 2 years.
@jbash What do you think would be a better strategy/more reasonable? Should there be more focus on mitigating risks after potential model theft? Or a much stronger effort to convince key actors to implement unprecedentedly strict security for AI?