Encryption that an AI couldn’t break is the easy part. Just don’t do something dumb like with WEP.
Strongly disagree. Consider, for example, the possibility that our encryption relies on the difficulty of factoring large numbers and the AI finds a way of doing so efficiently. Just because human mathematicians haven't succeeded at something doesn't mean a smart AI won't. Moreover, as far as encryption-related claims go, we've generally been much too optimistic about the difficulty of breaking encryption. See, for example, Rivest's famously incorrect estimate: in 1977, Rivest estimated that breaking RSA-129 would take around 10^15 years, but it was broken less than 20 years later.
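To make the dependence concrete, here's a toy sketch in Python (my own illustration, not real cryptography, using deliberately tiny primes): RSA's security rests entirely on the difficulty of factoring the public modulus n, and once an attacker factors it, recovering the private key is a few lines of arithmetic.

```python
# Toy RSA: once n is factored into p and q, the private key falls out.
p, q = 61, 53            # tiny primes; real RSA uses primes hundreds of digits long
n = p * q                # public modulus: 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # Euler's totient -- computable only if you can factor n
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # whoever factored n can now decrypt
```

So a super-intelligence that factored n efficiently would skip straight from the public key to the private one; nothing else in the scheme stands in its way. (The three-argument `pow` inverse needs Python 3.8+.)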
I strongly disagree. Not because I disagree with the super-intelligence-factoring-large-numbers part, since I actually considered that as I was writing. Rather, I assert that this does not warrant the conclusion that the encryption is the hard part. That is, I am asserting that the probability of humans proving that some suitable task is not solvable in Jupiter-Brain time is greater than the probability of successfully using such a proof to prevent a super-intelligence from accessing a computer system. Not only does such a system contain millions of places for software and hardware errors, but, more importantly, it contains human parts. I included "some guy from marketing plugging one of the holes with his finger" for a reason.
Ah, ok. Then we don't disagree substantially. I just consider the two possibilities (problems with the encryption method, and errors in implementation) to be roughly equally probable, or close enough given the data we currently have that I can't make a decent judgment on the matter, whereas you seem to think the human-error problem is substantially more likely.
Yes, it sounds like just a difference in degree.
This subject deserves a whole chapter of Harry Potter fanfiction: the need for Constant Vigilance when guarding against an enemy that is resourceful, clever, more powerful, and tireless. It would conclude with Mad-Eye Moody getting killed. Constant Vigilance is futile when you are only human. The only option is to kill the enemy once and for all, to eliminate that dependence.
I don’t think MoR really needs a chapter on that.
I mean, canon Harry Potter does that already: Mad-Eye (the real one) is captured by Dark forces before we ever meet him, tortured routinely, and two or three years later is killed by them.
(And of course, canon Mad-Eye had no chance of actually killing Voldemort once and for all, so Constant Vigilance was all he could do.)
More examples: (1) people have a history of reusing one-time pads; (2) side-channel attacks. The latter is a big deal that doesn't really fit the encryption-versus-implementation dichotomy.
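The one-time-pad-reuse failure is worth spelling out, since the math is unforgiving. A quick sketch (my own illustration): if two messages are encrypted with the same pad, XORing the two ciphertexts cancels the pad entirely, leaking the XOR of the plaintexts without the attacker ever touching the key.

```python
# Two messages encrypted with the SAME one-time pad: the pad cancels out.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(16)          # a genuinely random pad, used twice (the mistake)
m1 = b"attack at dawn!!"
m2 = b"retreat by dusk!"
c1 = xor(m1, pad)
c2 = xor(m2, pad)

# The attacker never sees the pad, yet:
leak = xor(c1, c2)            # equals m1 XOR m2 -- the pad is gone
assert leak == xor(m1, m2)    # and guessing either plaintext reveals the other
assert xor(leak, m1) == m2
```

This is exactly the mistake behind the VENONA decryptions: perfect secrecy in theory, broken in practice by key reuse. It's an implementation failure, not a weakness in the cipher itself.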