Hmm, this isn’t really how I model the situation with respect to racing. From my perspective, the question isn’t “security or no security”.
Interesting. I guess my model is that the default outcome (absent heroic efforts to the contrary) is indeed “no security against nation-state attackers”, which as far as I can tell is currently the default for practically everything developed on modern computing systems. Getting to the point where you could protect something like the weights of an AI model from nation-state actors would be extraordinarily difficult and an unprecedented achievement in computer security, which is why I don’t expect it to happen (even though many actors would really want it to happen).
My model is that cybersecurity is extremely offense-dominated for anything that requires internet access or requires thousands of people to have access (both of which I think are quite likely for deployed weights).