Thing which leaps out at me:
You assume PGP encryption is secure. I believe that—at a very basic level—PGP encryption is mathematically similar to the problem of factoring large numbers into primes, which is currently computationally “difficult”. If an AI spent time working on number theory (given that it has access to all of our world as an input, it would certainly be up to date with our most advanced techniques) there’s a danger it would simply be able to prove the Riemann Hypothesis and enter the world having already learned to quickly decrypt all of our communications.
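For concreteness, and taking the charitable reading below that RSA (the public-key scheme PGP typically uses) is what was meant: here is a minimal toy sketch of the RSA/factoring connection in Python. The primes and exponent are purely illustrative, nothing a real implementation would use, and real PGP adds padding and hybrid encryption on top.

```python
# Toy RSA sketch (illustrative only; real keys use ~1024-bit primes and
# padding). Shows that the private key falls out of the factorisation of
# n, so breaking this scheme is no harder than factoring n.
from math import gcd

p, q = 61, 53            # toy primes
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent (modular inverse; Python 3.8+)

m = 42                   # message, encoded as an integer < n
c = pow(m, e, n)         # encrypt with the public key (n, e)
assert pow(c, d, n) == m # decrypt with the private key d

def attack(n, e, c):
    """Recover m given only public data, by factoring n."""
    for cand in range(2, n):        # brute-force factoring: fine at toy
        if n % cand == 0:           # sizes, hopeless at real key sizes
            p_, q_ = cand, n // cand
            d_ = pow(e, -1, (p_ - 1) * (q_ - 1))
            return pow(c, d_, n)

assert attack(n, e, c) == m
```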
This doesn’t affect it as a thought experiment, but please don’t try to implement this as stated.
Actually, homomorphic encryption is currently based on a problem about ideal lattices, which is very different from factoring. The same complaint applies, though: we don’t really know that the problem is hard, just that we haven’t been able to solve it.
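To give the lattice flavour something concrete, here is a minimal sketch of a learning-with-errors style scheme in Python; LWE is a close relative of the ideal-lattice problems mentioned above, though the toy parameters here are wildly insecure and this is nothing like a full homomorphic construction. It only shows the characteristic pattern: ciphertexts are noisy inner products, adding ciphertexts adds the underlying bits, and the noise grows with each operation.

```python
# Toy LWE-style additively homomorphic encryption (insecure, illustrative
# parameters). A bit m is encrypted as (a, <a, s> + e + m*(q//2) mod q);
# adding ciphertexts component-wise XORs the plaintext bits while the
# noise terms accumulate.
import random

q, dim = 2**16, 32                              # modulus, secret length (toy)
s = [random.randrange(q) for _ in range(dim)]   # secret key

def encrypt(m):                                 # m is a bit, 0 or 1
    a = [random.randrange(q) for _ in range(dim)]
    e = random.randrange(-4, 5)                 # small noise term
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt(ct):
    a, b = ct
    x = (b - sum(x * y for x, y in zip(a, s))) % q
    return 1 if q // 4 < x < 3 * q // 4 else 0  # round to 0 or q//2

def add(ct1, ct2):                              # homomorphic XOR
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

c0, c1 = encrypt(0), encrypt(1)
assert decrypt(add(c0, c1)) == 1                # 0 XOR 1
assert decrypt(add(c1, c1)) == 0                # 1 XOR 1; noise has doubled
```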
The cryptographic operation I am describing is sufficiently limited (since you control the code of the adversary) that it is plausible we will develop unconditional proof techniques for it long before proving P != NP. I think it would be interesting to develop such techniques, and quarantining code may turn out to be useful.
The posted article makes no mention of PGP encryption.
I conjecture that AstroCJ (1) meant “RSA” and (2) has some misconceptions about the role of the Riemann hypothesis in number theory.
No. PGP is not the same as homomorphic encryption. Homomorphic encryption doesn’t depend on factoring in any way. Note also that proving the Riemann hypothesis doesn’t magically give you any way to factor numbers quickly. You may be confusing this with P = NP. If such an AI had an algorithm that efficiently solved some NP-complete problem then there would be a possible danger. But that’s a very different claim.
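The distinction is easy to make concrete. A factorisation is a witness that can be verified with a single multiplication, which is why an efficient solver for NP-complete problems would threaten factoring-based cryptography, whereas the Riemann hypothesis supplies no such algorithm. A small Python illustration:

```python
# Factoring is an NP-style search problem: a claimed factorisation is
# checked in one multiplication, but no fast way to *find* the factors
# is known. An efficient NP-complete solver would close that gap; a
# proof of the Riemann hypothesis would not.
n = 61 * 53

def check(n, p, q):          # verifying a claimed witness: one multiply
    return 1 < p < n and p * q == n

def find(n):                 # finding the witness: trial division,
    return next((d, n // d)  # exponential in the bit-length of n
                for d in range(2, n) if n % d == 0)

assert check(n, *find(n))
```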
It might help for you to read up a bit on theoretical computer science, since it sounds like you’ve adopted certain misconceptions that one might pick up from popularizations with minimal math content.