You don’t quite understand homomorphic encryption. Property (b) is actually guaranteed in a much stronger sense than you recognize. In fact it is impossible for the AI to hack the computer in any way, or signal in any way to someone watching the program execution. This is an important issue in cryptography, but more complicated than a blog post can apparently communicate. The point is, absolutely NO information is leaked to someone without the secret key. You can’t even tell if the AI just gave up and is sitting there.
I agree that people “just being curious” is an issue (basically, the AI getting out of the box without even using a text channel). Honestly, once you have designed a uFAI you already have this problem if people just decide to go and run it on their own. However, I believe that in the future we will be able to resolve this problem in a limited sense by destroying the original private key and leaving a gimped private key which can only be used to decrypt a legitimate output.
“You don’t quite understand homomorphic encryption.”
I definitely DON’T understand it. However, I’m not sure you’re making it clear—is this something you work with, or just have read about? It’s obviously complicated to explain in a blog post (though Schneier http://www.schneier.com/blog/archives/2009/07/homomorphic_enc.html makes a good start), but it surely ought to be possible to make a start on explaining what’s possible and what’s not!
It’s clearly possible for Alice to have some data, Eve to have a large computer, and Alice to want to run a well-known algorithm on her data. It’s possible to translate the algorithm into a series of additions and multiplications on the data, and thus have Eve run the algorithm on the encrypted data using her big computer and send the result back to Alice. Eve never knows what the content is, and Alice gets the answer back (assuming she has a way of checking it).
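(As a concrete toy illustration of what I mean by Eve computing on data she can’t read: the Paillier cryptosystem is additively homomorphic, so multiplying two ciphertexts gives an encryption of the sum of the plaintexts. The sketch below uses deliberately tiny primes and helper names I made up, and it only supports addition—unlike the fully homomorphic scheme the post relies on, which also handles multiplication.)

```python
# Toy Paillier cryptosystem: additively homomorphic (ciphertext product = plaintext sum).
# Tiny parameters for illustration only; NOT secure and NOT fully homomorphic.
import random
from math import gcd

def keygen(p=10007, q=10009):
    # Alice's keys; n is public, (lam, mu) stay private.
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                           # valid because we use g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    r = random.randrange(2, n)                     # fresh randomness every time
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    n, lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 1234), encrypt(pub, 4321)

# Eve, holding only the public key and the ciphertexts, "adds" the plaintexts
# by multiplying the ciphertexts, without learning anything about them:
c_sum = (c1 * c2) % (pub[0] ** 2)

assert decrypt(priv, c_sum) == 1234 + 4321
# Encryption is randomized: the same message never looks the same twice.
assert encrypt(pub, 1234) != encrypt(pub, 1234)
```

Note that encrypt uses fresh randomness, so two encryptions of the same value look completely unrelated to anyone without the private key.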
However, if Alice has a secret algorithm and secret data, and wants Eve to run one on the other, is it possible? Is this analogous to our case?
Is the answer “in principle, yes, since applying an algorithm to data is itself an algorithm, so it can be applied to an encrypted algorithm with encrypted data without ever knowing what the algorithm is”? But is it much less practical to do so, because each step involves evaluating much redundant data, or not?
I’m imagining, say, using the game of life to express an algorithm (or an AI). You can iterate a game of life set-up by combining each set of adjacent cells in a straight-forward way. However, you can’t use any of the normal optimisations on it, because you can’t easily tell what data is unchanged? Is any of this relevant? Is it better if you build dedicated hardware?
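Here is roughly what I have in mind, as a sketch in plain Python: a single game-of-life cell update written using only XOR and AND on bits (addition and multiplication mod 2), which is the kind of gate-by-gate circuit a homomorphic scheme could evaluate. Ordinary 0/1 values stand in for encrypted bits, and the helper names are made up.

```python
# One game-of-life cell update built only from XOR (addition mod 2) and AND
# (multiplication mod 2). Plain 0/1 ints stand in for encrypted bits here.

def add_bit(count_bits, b):
    """Add a single bit b into a little-endian 4-bit counter, using only XOR/AND."""
    out, carry = [], b
    for bit in count_bits:
        out.append(bit ^ carry)   # sum bit
        carry = bit & carry       # carry bit
    return out

def next_cell(alive, neighbours):
    """alive: 0/1; neighbours: eight 0/1 values. Returns the cell's next state."""
    count = [0, 0, 0, 0]
    for nb in neighbours:
        count = add_bit(count, nb)
    s0, s1, s2, s3 = count
    eq3 = s0 & s1 & (s2 ^ 1) & (s3 ^ 1)          # neighbour count == 3
    eq2 = (s0 ^ 1) & s1 & (s2 ^ 1) & (s3 ^ 1)    # neighbour count == 2
    return eq3 ^ (alive & eq2)                   # disjoint cases, so XOR acts as OR

# Dead cell with exactly three live neighbours comes to life:
assert next_cell(0, [1, 1, 1, 0, 0, 0, 0, 0]) == 1
# Live cell with only one live neighbour dies:
assert next_cell(1, [1, 0, 0, 0, 0, 0, 0, 0]) == 0
```

Nothing about the encrypted state can be inspected to skip work, which is why none of the usual optimisations would apply.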
If my understanding is correct (that you do run with the program and data encrypted at all times) then I agree that the scheme does in theory work: the AI has no way of affecting its environment in a systematic way if it can’t predict how its data appears unencrypted.
“I believe that in the future we will be able to resolve this problem in a limited sense by destroying the original private key and leaving a gimped private key which can only be used to decrypt a legitimate output.”
I mentioned this possibility, but had no idea if it would be plausible. Is this an inherent part of the cryptosystem, or something you hope it would be possible to design a cryptosystem to do? Do you have any idea if it IS possible?
“However, I’m not sure you’re making it clear—is this something you work with, or just have read about?”

To answer your question, since Paul apparently failed to—Paul reads crypto papers and spends non-trivial amounts of time thinking about how to design cryptographic protocols to attain various guarantees. I’m not sure if that constitutes ‘working with’ or not, but it presumably answers your question either way.
Thank you.
I’m sure I’m not making it clear. I may make a discussion post describing in much more detail exactly what is and is not possible in theoretical computational cryptography, because it is a really beautiful subject that might be independently interesting to many readers.
What you say in the last post seems precisely correct. You would use something much more efficient than the game of life, but homomorphic evaluation still makes any program run much, much slower. I have limited experience with computer architecture, but I would guess that a single operation on a modern processor would require about 300 homomorphic operations, many of which could be done in parallel. So optimistically you could hope that a program would run only a couple billion times slower using modern cryptography with dedicated hardware (it does parallelize really well though, for what it’s worth).
I think if it were concluded that quarantining code was useful, smart people could solve this problem with much, much better factors. I would not be surprised if you could make the code run, say, 100x slower but be provably secure in this sense.
The gimped key I mentioned is not a built-in part of the cryptosystem; it would have to be designed separately. Again, I have high confidence that it could be worked out if smart people decided this problem was interesting and worked on it.
Thank you.
But if you’re running something vaguely like a normal program, and the program wants to access memory location X while you’re not supposed to know which location is accessed, doesn’t that mean you have to evaluate the memory write instruction in combination with every memory location, for every instruction (whether or not that instruction is a memory write)?

So if your memory is, say, 500 MB, the evaluation is at least 500,000,000 times slower? I agree there are probably some optimisations, but I’m leery of being able to run a traditional linear program at all. That’s why I suggested something massively parallel like the game of life (although I agree that’s not actually designed for computation): if you have to evaluate all locations anyway, they might as well all do something useful. For instance, if your hypothetical AI were a simulation of a neural network, that would be perfect: if you’re allowed to know the connections as part of the source code, you can evaluate each simulated neuron in combination with only the ones it’s connected to, which is barely less efficient than evaluating them linearly, since you have to evaluate all of them anyway.
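Roughly what I have in mind, as a sketch (plain Python, ordinary integers standing in for ciphertexts, names made up for illustration): writing to a secret address means updating every cell with additions and multiplications, and only the selected cell actually changes.

```python
# Oblivious memory write: every cell is touched, so an observer learns nothing
# about which address was used. Plain ints stand in for ciphertexts here.

def select(addr_bits, i):
    """Indicator that the secret address equals the constant i, using only + and *."""
    sel = 1
    for k, bit in enumerate(addr_bits):           # addr_bits is little-endian
        wanted = (i >> k) & 1
        sel *= bit if wanted else (1 - bit)       # multiply by the bit or its complement
    return sel

def oblivious_write(mem, addr_bits, value):
    """Write value at the secret address; cost is linear in the size of memory."""
    for i in range(len(mem)):
        sel = select(addr_bits, i)                # 1 at the target cell, 0 elsewhere
        mem[i] = mem[i] + sel * (value - mem[i])  # unchanged wherever sel == 0
    return mem

mem = [10, 20, 30, 40, 50, 60, 70, 80]
addr_bits = [1, 0, 1]                 # secret address 5, least significant bit first
assert oblivious_write(mem, addr_bits, 99) == [10, 20, 30, 40, 50, 99, 70, 80]
```

With 500 MB of memory, that select-and-update loop over every cell is exactly where the factor of hundreds of millions comes from.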
I think that’s what my brain choked on when I imagined it the first time—yes, technically, 500,000,000 times slower is “possible in principle, but much slower”, but I didn’t realise that’s what you meant, hence my assumption that parts of the algorithm would not be encrypted. I think if you rewrite this post to make the central point more accessible (which would make it a very interesting post), it would not be hard to put in this much explanation about how the encryption would actually be used.
Unless I’m wrong that that is necessary? It wouldn’t be hard, just something like “you can even run a whole program which is encrypted, even if you have to simulate every memory location. That’s obviously insanely impractical, but if we were to do this for real we could hopefully find a way of doing it that doesn’t require that.” I think that would be 10,000 times clearer than just repeatedly insisting “it’s theoretically possible” :)
Running a C program with 500MB of memory would undoubtedly be very slow. Current technologies would be even slower than you suspect for other reasons.
However, there are other languages than C, and for some of them this transformation is much, much more efficient. The game of life is at least parallel, but it is also extremely inefficient. You could instead use some fundamentally parallel language which has all of the benefits of the game of life while actually being structured to do computation. Coincidentally, many AI researchers already work in functional languages like Lisp, which are very well-suited to this transformation. Running a huge Lisp program homomorphically might be practical if you had constant-rate homomorphic encryption, which I suspect we will eventually have. The slowdown could be something like 10x, running on 20x as many processors.
When I say that I think the problem is theoretically possible, I do also mean that I think significant speedups are theoretically possible. Homomorphic encryption is an unnecessarily huge hammer for this problem. It’s just that no one has thought about the problem before (because cryptographers don’t care about unfriendly AI, and people who care about friendly AI don’t think this approach is viable), so we have to co-opt a solution to a normal human problem.
I could include a complete discussion of these issues, and you are probably right that it would have made a more interesting article. Instead I initially assumed that anyone who didn’t know a lot about cryptography had little chance of learning it from a blog post (and people who did know wouldn’t appreciate the exposition), so I should post a convincing enough argument that I could resolve the usual concerns about AIs in boxes. I think as it stands I am going to write a post about theoretically safe things to do with an unfriendly AI in a box, and then, conditioned on that going well, I may revisit the question of building a box in a more carefully thought-out way. Until then I will perhaps add in a paragraph with some clearer exposition, as you suggest.
That sounds reasonable. I agree a complete discussion is probably too complicated, but it certainly seems that a few simple examples of the sort I eventually gave would help most people understand. They certainly helped me: I think many other people were puzzled, whereas with the simple examples I have now, I think (although I can’t be sure) I have a simplistic but essentially accurate idea of the possibilities.
I’m sorry if I sounded overly negative before: I definitely had problems with the post, but didn’t mean to be negative about it.
If I were breaking down the post into several, I would probably do:
(i) the fact of homomorphic encryption’s (apparent) existence, how that can be used to run algorithms on unknown data, a few theoretical applications of that, and a mention that this is unlikely to be practical at the moment. Also that it can in principle be used to execute an unknown algorithm on unknown data, which is really, really impractical but might become more practical with some sort of parallel processing design. At this point, I think most people would accept when you say it can be used to run an unfriendly AI.
(ii) If you like, more mathematical details, although this probably isn’t necessary
(iii) A discussion of friendliness-testing, which wasn’t in the original premise, but is something people evidently want to think about
(iv) any other discussion of running an unfriendly AI safely
Does the uFAI have access to its own public key? If so, I think it can produce coherent output.

All it has to do is encrypt random things with the public key until it finds something that produces coherent-looking ciphertext from its own perspective, and then there’s a two-way channel.
Homomorphic encryption has to be randomized. If you keep the randomness secret (as the security proof of homomorphic encryption requires) then the AI can’t possibly predict what the actual encryptions will look like because it doesn’t know anything about the randomness used to produce them.
This can be accomplished either by generating fresh quantum randomness for each encryption, or by using a pseudorandom generator which isn’t vulnerable to any passive side channel attack (certainly using fresh randomness is much safer).
Isn’t “encrypt random things with the public key, until it finds something that produces [some specific] ciphertext” exactly what encryption is supposed to prevent?? :)
(Not all encryption, but commonly)
In RSA, it is easy to find a message whose encryption begins with “001001,” for example.
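For instance, one simple way (a toy sketch, with deliberately tiny parameters): unpadded “textbook” RSA is deterministic, so anyone holding only the public key can just try messages until one encrypts to a ciphertext whose leading bits are whatever they want.

```python
# Unpadded "textbook" RSA is deterministic, so the holder of the public key can
# search for a plaintext whose ciphertext starts with chosen bits.
# Toy parameters, for illustration only.
import random

n, e = 10007 * 10009, 65537          # tiny public modulus and exponent
width = n.bit_length()               # pad ciphertexts to a fixed bit width

def message_with_prefix(prefix):
    attempts = 0
    while True:
        attempts += 1
        m = random.randrange(2, n)
        c = pow(m, e, n)             # textbook RSA: the same m always gives the same c
        if format(c, f"0{width}b").startswith(prefix):
            return m, c, attempts

m, c, attempts = message_with_prefix("001001")   # typically a few dozen attempts
print(m, attempts)
```

A randomized (semantically secure) scheme rules this out, because the searcher can never tell whether a trial encryption matches anything.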