I agree that these are good areas to deploy AI but I don’t see these being fairly easy to implement or result in a radical reduction in security risk on their own. Mainly because a lot of security is non-technical, and security involves tightening up a lot of little things that take time and effort.
AI could give us a huge leg up in monitoring—because as you point out, it’s labour-intensive, and some false positives are OK. But it’s a huge investment to get all of the right logs and continuously deepen your visibility. For example, many organisations do not monitor DNS traffic due to the high volume of logs generated. On-host monitoring tools make lots of tradeoffs about what events they try to capture without hosing the system or your database capacity—do you note every file access? None of these mean monitoring is ineffective, but if you don’t have a strong foundation then AI won’t help. And operating systems have so many ways for you to do things—if you know bash is monitored, you can take actions using ipython.
I think ‘trust displacement’ could be particularly powerful to remove direct access privileges from users. Secure and Reliable Systems talks about the use of a tool proxy to define higher level actions that users need to take, so they don’t need low level access. In practice this is cumbersome to define up front and relies on engineers with a lot of experience in the system, so you only end up doing it for especially sensitive or dangerous actions. Having an AI to build these for you, or do things on your behalf would reduce the cost of this control.
But in my experience a key challenge with permission management is that to do it well you can’t just give people the minimal set of privileges to do their job—you have to figure out how they could do their job with less privileges. This is extremely powerful, but it’s far from easy. People don’t like to change the way they do their work, especially if it adds steps. Logical appeals using threat models only go so far when people’s system 1 is not calibrated with security in mind—they just won’t feel like it’s worth it.
For these reasons good access management effectively takes cultural change, which is usually slow, and AI alone can’t solve that. Especially not at labs going as fast as they can, with employees threatening to join your competitor if you add friction or “security theatre” they don’t understand. One way this could go better than I expect is if it’s easier, faster or more reliable to have AI do the action for you, ie. if users have incentives to change their workflows to be more secure.
I agree that these are good areas to deploy AI but I don’t see these being fairly easy to implement or result in a radical reduction in security risk on their own. Mainly because a lot of security is non-technical, and security involves tightening up a lot of little things that take time and effort.
AI could give us a huge leg up in monitoring—because as you point out, it’s labour-intensive, and some false positives are OK. But it’s a huge investment to get all of the right logs and continuously deepen your visibility. For example, many organisations do not monitor DNS traffic due to the high volume of logs generated. On-host monitoring tools make lots of tradeoffs about what events they try to capture without hosing the system or your database capacity—do you note every file access? None of these mean monitoring is ineffective, but if you don’t have a strong foundation then AI won’t help. And operating systems have so many ways for you to do things—if you know bash is monitored, you can take actions using ipython.
I think ‘trust displacement’ could be particularly powerful to remove direct access privileges from users. Secure and Reliable Systems talks about the use of a tool proxy to define higher level actions that users need to take, so they don’t need low level access. In practice this is cumbersome to define up front and relies on engineers with a lot of experience in the system, so you only end up doing it for especially sensitive or dangerous actions. Having an AI to build these for you, or do things on your behalf would reduce the cost of this control.
But in my experience a key challenge with permission management is that to do it well you can’t just give people the minimal set of privileges to do their job—you have to figure out how they could do their job with less privileges. This is extremely powerful, but it’s far from easy. People don’t like to change the way they do their work, especially if it adds steps. Logical appeals using threat models only go so far when people’s system 1 is not calibrated with security in mind—they just won’t feel like it’s worth it.
For these reasons good access management effectively takes cultural change, which is usually slow, and AI alone can’t solve that. Especially not at labs going as fast as they can, with employees threatening to join your competitor if you add friction or “security theatre” they don’t understand. One way this could go better than I expect is if it’s easier, faster or more reliable to have AI do the action for you, ie. if users have incentives to change their workflows to be more secure.