I think you are being overly optimistic about homomorphic encryption. The uFAI doesn’t need absolute control over how the computation happens, nor does it need to perfectly predict the real-world results of running some computation. It only needs some amount of information leakage. The best example I can think of is timing attacks on cryptographic protocols: the protocol itself is secure, but a side channel makes the implementation insecure. Another example would be the Meltdown and Spectre vulnerabilities. How do you know your computational system doesn’t have something like that?
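For concreteness, here’s a minimal sketch of the classic timing-attack pattern (the secret, the sleep, and the character range are all made up for the demo): an early-exit comparison whose running time grows with the number of leading bytes that match, which lets an attacker recover the secret one byte at a time just by measuring wall-clock time.

```python
import time

SECRET = b"hunter2"  # hypothetical secret for the demo

def insecure_check(guess: bytes) -> bool:
    # Early-exit comparison: runtime grows with the number of leading
    # bytes that match, leaking the secret through timing alone.
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # exaggerate the per-byte cost so the demo is reliable
    return True

def time_guess(guess: bytes) -> float:
    start = time.perf_counter()
    insecure_check(guess)
    return time.perf_counter() - start

# Recover the secret byte by byte: at each position, the candidate that
# takes longest is the one whose prefix matched one byte further.
recovered = b""
for _ in range(len(SECRET)):
    pad = b"\x00" * (len(SECRET) - len(recovered) - 1)
    candidates = [recovered + bytes([b]) + pad for b in range(32, 127)]
    recovered += bytes([max(candidates, key=time_guess)[len(recovered)]])
print(recovered)  # b'hunter2' (with high probability)
```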
The first idea that comes to mind is pausing. Suppose the uFAI has some way to pause computation (a very common operation; even without it, an equivalent might be achieved by running a specific computation in a tight loop). Then the uFAI might be able to send Morse code by alternating pauses, long computations, and short computations. I imagine that exact scheme is naive, but do you really believe the uFAI won’t find any such vulnerability in your system?
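To make the covert channel concrete, here’s a toy sketch, assuming (hypothetically) that someone outside can observe how long each step of the computation takes. The program inside modulates its own running time to spell out bits; the iteration counts and the encoding are invented for illustration.

```python
import time

def busy_loop(iterations: int) -> None:
    # A "pause" needn't be a real sleep: a tight loop of useless work
    # looks the same to an outside observer watching the clock.
    x = 0
    for _ in range(iterations):
        x = (x * 31 + 7) % 1_000_003

SHORT, LONG = 50_000, 500_000  # hypothetical iteration counts

def transmit(bits: str) -> list[float]:
    # Inside: encode each bit as a short or long burst of work.
    durations = []
    for bit in bits:
        start = time.perf_counter()
        busy_loop(LONG if bit == "1" else SHORT)
        durations.append(time.perf_counter() - start)
    return durations

def receive(durations: list[float]) -> str:
    # Outside: recover the bits by thresholding the observed step times.
    threshold = (min(durations) + max(durations)) / 2
    return "".join("1" if d > threshold else "0" for d in durations)

print(receive(transmit("1011001")))  # '1011001'
```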
In homomorphic encryption, someone living inside the computation can’t affect the outside world unless they break the underlying cryptographic assumption. It’s not at all analogous to side-channel attacks on other cryptographic operations, where the security of the primitive itself says nothing about the security of the implementation.
Put differently: in the typical security model, the computer doing the encryption is assumed to be a magical computer that doesn’t leak information and keeps the secret key perfectly secret. But the computer actually running the encrypted computation is assumed to be adversarial, so it can be as imperfect as you like without breaking anything. The analogue of a traditional side-channel attack would be leaking information from the encryption process itself (which you would still have to be careful about here).
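To illustrate the trust split, here is a toy additively homomorphic scheme (essentially a one-time pad mod a public modulus, nothing like real FHE, but it shows the shape): the trusted side holds the keys and does the encryption and decryption; the untrusted evaluator only ever touches ciphertexts, which are uniformly random without the keys, so any side channel on the evaluator’s hardware leaks nothing about the plaintexts.

```python
import secrets

P = 2**61 - 1  # public modulus

# --- Trusted side: holds the secret keys, assumed leak-free ---
def keygen(n: int) -> list[int]:
    return [secrets.randbelow(P) for _ in range(n)]

def encrypt(m: int, k: int) -> int:
    return (m + k) % P

def decrypt(c: int, k: int) -> int:
    return (c - k) % P

# --- Untrusted evaluator: sees only ciphertexts ---
def eval_sum(ciphertexts: list[int]) -> int:
    # Adding ciphertexts adds the underlying plaintexts (mod P).
    # Everything in this function's memory is uniformly random to
    # anyone without the keys, so its side channels reveal nothing.
    total = 0
    for c in ciphertexts:
        total = (total + c) % P
    return total

messages = [3, 14, 15]
keys = keygen(len(messages))
cts = [encrypt(m, k) for m, k in zip(messages, keys)]

result_ct = eval_sum(cts)                   # runs on adversarial hardware
result = decrypt(result_ct, sum(keys) % P)  # back on the trusted side
print(result)  # 32
```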
For example, it’s impossible to give a homomorphically encrypted computation the ability to “pause.” (How would you even try to do that?) And the computation you perform cannot depend at all on what the AI is thinking. Of course, this also guarantees that homomorphically encrypted computations are slow in practice.
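The reason is that an FHE program is, in effect, a fixed circuit: “branches” are evaluated on both sides and then selected arithmetically, so the host observes the same sequence of operations whatever the encrypted values are, and there’s no step at which the computation could decide to stall. A plaintext sketch of the control-flow shape this forces (real FHE would keep `is_neg` encrypted too):

```python
def oblivious_select(cond: int, a: int, b: int) -> int:
    # cond is 0 or 1; selection is pure arithmetic, so there is no
    # data-dependent branch for the host to observe.
    return cond * a + (1 - cond) * b

def oblivious_abs(x: int) -> int:
    # Compute BOTH possible results, then multiplex between them.
    is_neg = 1 if x < 0 else 0  # under FHE this comparison would itself
                                # produce an encrypted bit, not a branch
    return oblivious_select(is_neg, -x, x)

print(oblivious_abs(-5), oblivious_abs(7))  # 5 7
```

This also makes the cost point vivid: you always pay for both sides of every branch and the worst case of every loop.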
Note that since the OP was written there have been credible proposals for indistinguishability obfuscation, which would be the more natural tool to use here, though it’s bound to be even less competitive.
(I think the crypto in the OP is fine, but I no longer endorse it / consider it interesting.)