This plan seems to underemphasize security. I expect that for 10x AI R&D[1], you strongly want fully state-proof security (SL5) against weight exfiltration, and then quickly after that you want this level of security for algorithmic secrets and against unauthorized internal usage[2].
Things don’t seem to be on track for this level of security, so I expect a huge scramble to achieve it.
[1] “10x AI R&D” could refer to “software progress in AI is now 10x faster” or “the labor input to software progress is now 10x greater”. If it’s the second reading, it’s plausible that fully state-proof security isn’t the most important thing, because AI progress may be mostly bottlenecked by other factors (a toy model below illustrates this). However, 10x labor acceleration is still pretty crazy, and I think you want SL5.
[2] Unauthorized internal usage includes stuff like foreign adversaries doing their weapons R&D on your cluster using your model weights.
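To make footnote [1]’s bottleneck point concrete, here is a minimal toy sketch (my illustration, not from the plan; the CES functional form and all parameter values are assumptions) of why 10x faster labor need not mean 10x faster overall software progress:

```latex
% Toy model: rate of software progress P as a CES aggregate of labor L
% and compute C. Functional form and parameters are illustrative
% assumptions, not taken from the plan under review.
P(L, C) = \left( \alpha L^{\rho} + (1 - \alpha)\, C^{\rho} \right)^{1/\rho},
\qquad \rho < 0 \ \text{(labor and compute are complements)}

% With \alpha = 1/2, \rho = -1, and normalized inputs L = C = 1,
% accelerating labor 10x while compute stays fixed gives
\frac{P(10, 1)}{P(1, 1)}
  = \left( \tfrac{1}{2} \cdot \tfrac{1}{10} + \tfrac{1}{2} \right)^{-1}
  = \frac{1}{0.55} \approx 1.8

% i.e. only ~1.8x faster overall progress, because fixed compute binds.
% As \rho \to -\infty (pure Leontief), the speedup from extra labor
% alone approaches 1x.
```

Under this kind of complementarity, the “10x labor” reading is much weaker than the “10x overall progress” reading, which is why the urgency of SL5 could plausibly differ between the two.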