In a world where not all employees at any given AI research lab believe that AI presents a large and present danger, I think insider threats are a major factor to consider.

Yes, there's a lot of disagreement about policy, especially regarding model open-sourcing. It seems likely to me that some employees at large AI labs (Google DeepMind, Meta, Microsoft Research, etc.) will always disagree with the overall policy of their organisation. This creates a higher base rate of insider-threat risk.