In homomorphic encryption, someone living inside the computation can’t affect the outside unless the whole system breaks the cryptographic assumption. It’s not at all analogous to side channel attacks on other cryptographic operations, for which the security of the primitive itself says nothing about the security of the implementation.
Put differently: in the typical security model, the computer doing the encryption is assumed to be a magical computer that doesn’t leak information and keeps the secret key perfectly secret. But the computer that’s actually running the encrypted computation is assumed to be adversarial, so it can be as imperfect as you want and it’s not going to break things. The analogy to a traditional side-channel attack would be leaking information from the encryption process (which you would still have to be careful about here).
For example, it’s impossible to provide a homomorphically encrypted computation the ability to “pause.” (How would you even try to do that?) And the computation you do cannot depend at all on what the AI is thinking. Of course this guarantees that homomorphically encrypted computations are slow in practice.
Note that since making the OP there have been credible proposals for indistinguishability obfuscation, which would be the more natural thing to use though it’s bound to be even less competitive.
(I think the crypto in the OP is fine, but I no longer endorse it / consider it interesting.)
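An added illustration of the trust model in the comment above (this is example code, not the commenter's or the OP's). It uses the Paillier scheme, which is only additively homomorphic, as a stand-in for a fully homomorphic one, with constructed toy primes (61 and 53) that give no real security. The roles, the `Evaluator` class and the operation trace are my own constructions: the key holder encrypts and decrypts, while the evaluator holds only the public key, so every step it takes is on ciphertexts and its sequence of operations is the same whatever the plaintexts are.

```python
"""Toy Paillier demo of the homomorphic-encryption trust model.

Key holder: generates keys, encrypts inputs, decrypts the result.
Evaluator:  untrusted machine that only ever sees the public key and
            ciphertexts, so it cannot branch, pause or leak based on
            the plaintext values it is computing over.
"""
import math
import random

# Constructed toy primes (insecure); real deployments use very large primes.
P, Q = 61, 53


def keygen(p=P, q=Q):
    """Paillier key generation with the usual choice g = n + 1."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    assert math.gcd(n, lam) == 1
    mu = pow(lam, -1, n)  # modular inverse of lambda
    return (n, n + 1), (lam, mu)  # (public key, secret key)


def encrypt(pk, m):
    """c = g^m * r^n mod n^2 with fresh randomness r coprime to n."""
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(pk, sk, c):
    """m = L(c^lambda mod n^2) * mu mod n, where L(u) = (u - 1) / n."""
    n, _ = pk
    lam, mu = sk
    u = pow(c, lam, n * n)
    return (((u - 1) // n) * mu) % n


class Evaluator:
    """The untrusted party. It holds the public key only; `trace` records
    the homomorphic operations it performs so input-independence can be
    checked from the outside."""

    def __init__(self, pk):
        self.n2 = pk[0] ** 2
        self.trace = []

    def add(self, c1, c2):
        # Ciphertext multiplication adds the underlying plaintexts.
        self.trace.append("add")
        return (c1 * c2) % self.n2

    def scale(self, c, k):
        # Ciphertext exponentiation multiplies the plaintext by a constant.
        self.trace.append(("scale", k))
        return pow(c, k, self.n2)

    def weighted_sum(self, cts, weights):
        acc = self.scale(cts[0], weights[0])
        for c, w in zip(cts[1:], weights[1:]):
            acc = self.add(acc, self.scale(c, w))
        return acc


def run_encrypted_weighted_sum(xs, weights):
    """Key holder encrypts xs, evaluator computes sum(w_i * x_i) blind,
    key holder decrypts. Returns (plaintext result, evaluator's trace)."""
    pk, sk = keygen()
    cts = [encrypt(pk, x) for x in xs]        # only the key holder sees xs
    ev = Evaluator(pk)                        # evaluator never gets sk
    result_ct = ev.weighted_sum(cts, weights)
    return decrypt(pk, sk, result_ct), ev.trace


def traces_are_input_independent(weights, inputs_a, inputs_b):
    """True when the evaluator performed exactly the same operations on two
    different plaintext inputs: its behaviour cannot depend on the data."""
    _, trace_a = run_encrypted_weighted_sum(inputs_a, weights)
    _, trace_b = run_encrypted_weighted_sum(inputs_b, weights)
    return trace_a == trace_b


if __name__ == "__main__":
    print(run_encrypted_weighted_sum([1, 2, 3], [4, 5, 6])[0])      # 32
    print(traces_are_input_independent([4, 5, 6], [1, 2, 3], [100, 0, 7]))
```

Plaintexts and the final sum must stay below n = 3233 for the toy parameters to decrypt correctly; that limit is an artefact of the small primes, not of the scheme.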