One of my old CS teachers defended treating the environment as adversarial, and as knowing your source code, because of hackers. See median-of-3 killers. (I’d link something, but aside from one paper, a small amount of googling doesn’t turn up a nice link explaining what they are.)
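Roughly: a median-of-3 killer is an input sequence constructed so that a quicksort using median-of-three pivot selection degrades to its quadratic worst case. The exact sequence depends on the implementation, so here is the same trick against an even simpler pivot rule (a hypothetical Python sketch, not taken from that paper): an adversary who knows the sort always pivots on the first element just feeds it already-sorted data.

```python
import random

def quicksort_first_pivot(a, counter):
    """Quicksort that always pivots on the first element; tallies comparisons in counter[0]."""
    if len(a) <= 1:
        return list(a)
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)  # count one comparison per element partitioned against the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort_first_pivot(left, counter) + [pivot] + quicksort_first_pivot(right, counter)

n = 500
for name, data in [("random input", random.sample(range(n), n)),
                   ("adversarial (already sorted) input", list(range(n)))]:
    counter = [0]
    quicksort_first_pivot(data, counter)
    print(name, counter[0])
# random input            -> a few thousand comparisons (~n log n)
# already sorted input    -> the full n*(n-1)/2 = 124,750
```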
I don’t see why Yudkowsky makes superintelligence a requirement for this.
Also, it doesn’t even have to be source code that they have access to (which they would anyway if it were open-source software). There are such things as disassemblers and decompilers.
[Edit: removed implication that Yudkowsky thought source code was necessary]
I don’t see why Yudkowsky makes superintelligence a requirement for this.
Because in theoretical CS, at least, when we talk about ‘worst-case’ inputs, it would often take something of that order to deliberately hand you the worst case. I don’t think Eliezer would object at all to this kind of reasoning where there actually was a plausible possibility of an adversary involved. In fact, one focus of things like cryptography (or systems security?) (where this is assumed) is to structure things so the adversary has to solve as hard a problem as you can make it. Assuming worst-case input is like assuming that the hacker has to do no work to solve any of these problems, and automatically knows the inputs that will screw with your solution most.
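To make that contrast concrete (again a hypothetical sketch, not something from the original post): against a deterministic pivot rule the adversary can precompute one killer input and reuse it forever, while a randomized pivot means they would have to predict your random choices to do any better than the average case.

```python
import random

def quicksort_random_pivot(a, counter):
    """Quicksort with a uniformly random pivot; tallies comparisons in counter[0]."""
    if len(a) <= 1:
        return list(a)
    i = random.randrange(len(a))
    pivot, rest = a[i], a[:i] + a[i+1:]
    counter[0] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort_random_pivot(left, counter) + [pivot] + quicksort_random_pivot(right, counter)

# The "killer" input from the sketch above no longer does anything special:
# expected comparisons are O(n log n) for every fixed input, because doing
# worse would require predicting the pivot choices in advance.
counter = [0]
quicksort_random_pivot(list(range(500)), counter)
print(counter[0])   # typically a few thousand, not 124,750
```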
I don’t think Eliezer would object at all to this kind of reasoning where there actually was a plausible possibility of an adversary involved.
Yep! The original article said that this was a perfectly good assumption and a perfectly good reason for randomization in cryptography, paper-scissors-rock, or any other scenario where there is an actual adversary, because it is perfectly reasonable to use randomness to prevent an opponent from being intelligent.
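A toy version of the same point (a hypothetical sketch; the move set and scoring are just the obvious ones): a paper-scissors-rock bot that plays the uniform mixed strategy from a cryptographically strong source cannot be exploited in expectation, no matter how carefully the opponent has read its code.

```python
import secrets

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def mixed_strategy_move():
    """Play uniformly at random from a cryptographically strong source."""
    return secrets.choice(MOVES)

def score(mine, theirs):
    """+1 for a win, 0 for a draw, -1 for a loss, from 'mine's point of view."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

# Even an opponent who has read this source code can't exploit it:
# against the uniform strategy, every counter-strategy scores 0 in expectation.
rounds = 10_000
print(sum(score(mixed_strategy_move(), "rock") for _ in range(rounds)) / rounds)
```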
Assuming worst-case input is like assuming that the hacker has to do no work to solve any of these problems, and automatically knows the inputs that will screw with your solution most.
And what do you suggest assuming instead?
Anyway, most proofs of asymptotic security in cryptography are conditional on conjectures such as “P != NP” or “f is a one-way function”.
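For reference, the second of those conjectures says, roughly (my paraphrase of the standard textbook definition, not a quote from any particular proof):

\[
f \text{ is one-way} \iff f \text{ is computable in polynomial time, and for every probabilistic poly-time } A:\quad
\Pr_{x \leftarrow \{0,1\}^n}\!\big[\, f\big(A(1^n, f(x))\big) = f(x) \,\big] \le \operatorname{negl}(n).
\]

The existence of such an f already implies P != NP, which is part of why all of these guarantees stay conditional.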