I don’t see why Yudkowsky makes superintelligence a requirement for this.
Because in theoretical CS, at least, when we talk about ‘worst-case’ inputs, it would often require something of this order of intelligence to deliberately hand you the worst case. I don’t think Eliezer would object at all to this kind of reasoning where there actually was a plausible possibility of an adversary involved. In fact, one focus of fields like cryptography and systems security (where an adversary is assumed) is to structure things so the adversary has to solve as hard a problem as you can make it. Assuming worst-case input is like assuming that the hacker has to do no work to solve any of these problems, and automatically knows the inputs that will screw with your solution most.
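To make the point concrete, here is a toy sketch (my illustration, not from the discussion): with a deterministic pivot rule, an adversary who knows your quicksort can hand you its quadratic worst case for free, while a random pivot denies them that knowledge.

```python
import random

def quicksort_comparisons(xs, pivot_index):
    """Count the comparisons quicksort makes on xs with the given pivot rule."""
    if len(xs) <= 1:
        return 0
    pivot = xs[pivot_index(xs)]
    less = [x for x in xs if x < pivot]      # n - 1 comparisons per level
    greater = [x for x in xs if x > pivot]
    return (len(xs) - 1) \
        + quicksort_comparisons(less, pivot_index) \
        + quicksort_comparisons(greater, pivot_index)

# Adversarial input against the "always pick the first element" rule:
# an already-sorted list forces the full n(n-1)/2 comparisons.
adversarial = list(range(300))
det = quicksort_comparisons(adversarial, lambda xs: 0)
rnd = quicksort_comparisons(adversarial, lambda xs: random.randrange(len(xs)))
print(det)  # 44850 — quadratic blowup on the prepared input
print(rnd)  # far fewer — randomization makes the prepared input useless
```

The adversary can only prepare a bad input for a pivot rule they can predict; once the pivot is random, no single input is reliably worst-case.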
I don’t think Eliezer would object at all to this kind of reasoning where there actually was a plausible possibility of an adversary involved.
Yep! The original article said that this was a perfectly good assumption and a perfectly good reason for randomization in cryptography, paper-scissors-rock, or any other scenario where there is an actual adversary, because it is perfectly reasonable to use randomness to prevent an opponent from being intelligent.
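The paper-scissors-rock case can be checked directly (a sketch of mine, not from the article): against uniform random play, every move the opponent can make has expected payoff zero, so their intelligence buys them nothing.

```python
# Payoff to the opponent: +1 win, 0 tie, -1 loss.
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(opponent_move, my_move):
    """Opponent's payoff when they play opponent_move against my_move."""
    if opponent_move == my_move:
        return 0
    return 1 if BEATS[opponent_move] == my_move else -1

# If I mix uniformly over the three moves, each opponent move wins,
# ties, and loses exactly once, so every pure strategy averages to 0.
expected = {opp: sum(payoff(opp, mine) for mine in MOVES) / 3 for opp in MOVES}
print(expected)  # {'rock': 0.0, 'paper': 0.0, 'scissors': 0.0}
```

This is exactly the sense in which randomness "prevents the opponent from being intelligent": the uniform mix is a Nash equilibrium, so no modeling of the randomizing player can improve on breaking even.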
Assuming worst-case input is like assuming that the hacker has to do no work to solve any of these problems, and automatically knows the inputs that will screw with your solution most.
And what do you suggest assuming instead?
Anyway, most proofs of asymptotic security in cryptography are conditional on conjectures such as “P != NP” or “f is a one-way function”.
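The one-way-function conjecture has an easy-to-see shape (toy numbers of my choosing, far too small for real security): modular exponentiation is fast to compute forward, but inverting it, the discrete logarithm, is conjectured to be hard at cryptographic sizes, and security proofs are conditional on exactly that kind of conjecture.

```python
# Toy prime and generator, chosen for illustration only.
p, g = 101, 2
x = 57                    # the secret exponent

# Forward direction: fast even for enormous exponents (square-and-multiply).
y = pow(g, x, p)          # y == 74

# Inverse direction: at this toy scale nothing beats brute-force search,
# and at real scales (2048-bit p) that search is conjectured infeasible.
recovered = next(e for e in range(p - 1) if pow(g, e, p) == y)
print(recovered)  # 57
```

The conditional structure of the proofs mirrors this asymmetry: "if inverting f takes superpolynomial time, then the scheme is secure" — the security claim is only as strong as the conjecture it rests on.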