Two comments.
One, do you think a “homomorphically encrypted version of their best AI” is a viable thing? As far as I know, homomorphically encrypted software is very, very slow. By the time a homomorphically encrypted version completes its AI-level tests, it might well be obsolete.
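To illustrate what I mean by computing on ciphertexts, here is a minimal Python sketch using the classic additively homomorphic Paillier scheme. The primes, names, and numbers are toy choices of mine, it supports only additions, and it says nothing about the cost of the *fully* homomorphic schemes you would need to actually run an AI, which are far more expensive:

```python
# Toy Paillier sketch: demo-sized primes, additions only, NOT secure and
# NOT representative of FHE performance.
import math
import random

p, q = 293, 433            # demo primes; real keys use 1024+ bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                  # standard Paillier generator
lam = (p - 1) * (q - 1)    # valid here since gcd(lam, n) == 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputable constant
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2     # multiplying ciphertexts adds the plaintexts
assert decrypt(c_sum) == 42
```

Even this one encrypted addition costs several big-integer exponentiations where plaintext addition costs one instruction, and that gap only widens for the circuits an AI would need.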
Second, nuclear inspection regimes and the like have the goal of verifying a cap on capabilities: usually you are not allowed to have more than X missiles or Y kg of enriched uranium. But that is not the information Yao’s problem provides. Imagine that during the Cold War, all the US and the USSR could know was whether one side’s nuclear arsenal was better than the other’s. That doesn’t sound stabilizing at all to me.
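To make that concrete, here is a sketch of the ideal functionality that Yao’s millionaires’ problem computes. The names and inputs are hypothetical, and the real protocol replaces the trusted comparison with garbled circuits and oblivious transfer rather than a function call:

```python
# Ideal functionality of Yao's millionaires' problem: each party learns
# ONLY the single comparison bit, never the other side's value.
# (Hypothetical inputs; the cryptography that achieves this without a
# trusted party is omitted.)
def millionaires(us_capability: int, ussr_capability: int) -> bool:
    # The entire output released to both sides: one bit.
    return us_capability > ussr_capability

# Neither side learns missile counts, stockpile sizes, or margins, so a
# treaty cap like "no more than X missiles" cannot be checked this way.
print(millionaires(7, 9))  # -> False: "the other side is ahead", nothing more
```

One bit out, and that is the problem: arms-control verification needs per-side caps, not a ranking.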
Yes. See the reference. Even a 10× or 100× computation cost increase would be acceptable for top-level national security purposes like this.
That sounds very stabilizing to me. ‘We must prevent a missile gap!’
Which reference? I’m not talking about the millionaires’ problem; I’m talking about executing homomorphic code.
One side thinks this and so accelerates the arms race. The other side thinks, “This is our chance! We must strike while we know we’re ahead!” :-/