Running a C program with 500MB of memory homomorphically would undoubtedly be very slow, and current technologies would be even slower than you suspect, for other reasons besides.
However, there are other languages than C, and for some this transformation is much, much more efficient. The game of life is at least parallel, but it is also extremely inefficient. You could instead use a fundamentally parallel language that has all of the benefits of the game of life while actually being structured to do computation. Coincidentally, many AI researchers already work in functional languages like Lisp, which are very well-suited to this transformation. Running a huge Lisp program homomorphically might be practical if you had constant-rate homomorphic encryption, which I suspect we will eventually have. The slowdown could be something like 10x, running on 20x as many processors.
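As a concrete illustration of what "running a program homomorphically" means at the bit level, here is a minimal sketch of a toy somewhat-homomorphic scheme over the integers (DGHV-style, after van Dijk, Gentry, Halevi, and Vaikuntanathan). The parameter sizes and helper names are mine, and the scheme is wildly insecure at these sizes; it is meant only to show that adding ciphertexts XORs the hidden bits and multiplying them ANDs the hidden bits, which is all you need to evaluate any circuit.

```python
import random

# Toy somewhat-homomorphic encryption over the integers (DGHV-style).
# Parameters are FAR too small to be secure; illustrative only.

def keygen(bits=64):
    # Secret key: a large random odd integer p.
    return random.getrandbits(bits) | (1 << (bits - 1)) | 1

def encrypt(p, m, noise_bits=8, mult_bits=16):
    # Ciphertext: m + 2r + p*q.  Decryption works while m + 2r < p.
    r = random.getrandbits(noise_bits)
    q = random.getrandbits(mult_bits)
    return m + 2 * r + p * q

def decrypt(p, c):
    return (c % p) % 2

# Ciphertext addition computes XOR of the hidden bits; ciphertext
# multiplication computes AND.  The noise term grows with each
# operation, so only shallow circuits survive before decryption
# fails -- "somewhat" homomorphic.  Fully homomorphic schemes add
# bootstrapping to reset the noise.
def he_xor(c1, c2): return c1 + c2
def he_and(c1, c2): return c1 * c2

p = keygen()
a, b = encrypt(p, 1), encrypt(p, 0)
assert decrypt(p, he_xor(a, b)) == 1   # 1 XOR 0
assert decrypt(p, he_and(a, b)) == 0   # 1 AND 0
```

A compiler from a parallel functional language would, in principle, turn each primitive step into such additions and multiplications; the noise growth shown here is exactly what bootstrapping in a fully homomorphic scheme is designed to repair.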
When I say that I think the problem is theoretically possible, I do also mean that I think significant speedups are theoretically possible. Homomorphic encryption is an unnecessarily huge hammer for this problem. It's just that no one has thought about the problem before (cryptographers don't care about unfriendly AI, and people who care about friendly AI don't think this approach is viable), so we have to co-opt a solution to a normal human problem.
I could include a complete discussion of these issues, and you are probably right that it would have made a more interesting article. Instead I initially assumed that anyone who didn't know a lot about cryptography had little chance of learning it from a blog post (and that people who did know wouldn't appreciate the exposition), so I should post an argument convincing enough to resolve the usual concerns about AIs in boxes. As it stands, I think I am going to write a post about theoretically safe things to do with an unfriendly AI in a box, and then, conditioned on that going well, I may revisit the question of building a box in a more carefully thought-out way. Until then I will perhaps add a paragraph with some clearer exposition, as you suggest.
That sounds reasonable. I agree a complete discussion is probably too complicated, but a few simple examples of the sort I eventually gave would probably help most people understand. They certainly helped me: many other people seemed puzzled, whereas with the simple examples I have now, I think (although I can't be sure) that I have a simplistic but essentially accurate idea of the possibilities.
I'm sorry if I sounded overly negative before: I definitely had problems with the post, but didn't mean to be negative about it.
If I were breaking the post down into several, I would probably do:
(i) The fact of homomorphic encryption's (apparent) existence, how it can be used to run known algorithms on unknown data, and a few theoretical applications of that, with a mention that this is unlikely to be practical at the moment. Then: that it can in principle be used to execute an unknown algorithm on unknown data, but that this is really, really impractical (see the sketch after this list for why), though it might become more practical with some sort of parallel processing design. At this point, I think most people would accept it when you say it can be used to run an unfriendly AI.
(ii) If you like, more mathematical details, although this probably isn’t necessary
(iii) A discussion of friendliness-testing, which wasn't in the original premise, but is something people evidently want to think about
(iv) Any other discussion of running an unfriendly AI safely
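To make the impracticality claimed in (i) concrete, here is a minimal sketch, with plain Python integers standing in for ciphertexts, of the data-oblivious memory read an encrypted interpreter is forced into. Since the evaluator can never branch on encrypted values, it selects with arithmetic instead; the helper names (to_bits, eq, oblivious_read) are hypothetical, not any real library's API, and every operation used (+ and *) is one a homomorphic scheme supplies, so the same structure would run over encrypted bits.

```python
def to_bits(n, width):
    # Little-endian bit decomposition of an index.
    return [(n >> i) & 1 for i in range(width)]

def eq(a_bits, b_bits):
    # Equality as a circuit: AND of XNORs, built from + and * only.
    out = 1
    for a, b in zip(a_bits, b_bits):
        xor = a + b - 2 * a * b   # XOR of bits via arithmetic
        out = out * (1 - xor)     # multiply in the XNOR
    return out

def oblivious_read(addr_bits, memory):
    # The evaluator cannot branch on the (encrypted) address, so it
    # must touch EVERY cell; exactly one term of the sum is nonzero.
    total = 0
    for i, cell in enumerate(memory):
        total += eq(addr_bits, to_bits(i, len(addr_bits))) * cell
    return total

memory = [7, 3, 9, 1]
assert oblivious_read(to_bits(2, 2), memory) == 9
```

Every read touches every cell, and a full interpreter must likewise mux over every possible instruction at every step, so naively running an unknown program costs roughly (memory size) times (program size) per step, before the per-gate cost of the encryption itself is even counted.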