The analogue of P≠NP for randomized polynomial-time algorithms is actually NP≠RP (or equivalently NP⊈BPP), assuming that the intended meaning here is “NP-complete problems cannot be solved in polynomial time, even with randomized algorithms”.[1] More information about these classes is available here.
Also, a bit of a nitpick, but it looks like the claim that these widely-believed complexity-theoretic assumptions imply
that there is no way, in polynomial time, to get reliable information about A’s secret circuit C (beyond some statistical regularities) from looking at polynomially many input-output samples of C.
is implicitly claiming that a “learning problem” similar to the following is NP-hard:
There is a hidden function f that takes n bits as input and outputs one bit, and can be described with a polynomial-sized circuit. You are given some number of input-output pairs (x_i, f(x_i)). Find a polynomial-sized circuit C that is consistent with the given inputs and outputs.
It seems likely that this learning problem is indeed computationally difficult, given that the naive approach requires brute-force search through circuits. However, to actually prove NP-hardness, there needs to be a reduction from 3SAT (say) to that problem. I don’t see any straightforward approach to construct such a reduction, even though it is not entirely implausible that one exists (possibly with minor modifications to the problem statement).
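To make the naive approach concrete, here is a minimal brute-force sketch in Python. The straight-line-program encoding of circuits, the gate set, and the tiny parameters are illustrative choices of mine, not anything specified above; the point is only that the search space grows exponentially in the gate budget.

```python
# Toy brute-force for the learning problem above. A circuit is a
# straight-line program: each gate (op, i, j) combines two earlier
# wires; the last wire is the output. This encoding is illustrative.
from itertools import product

OPS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
}

def eval_circuit(gates, x):
    """Evaluate a straight-line circuit on input bits x."""
    wires = list(x)
    for op, i, j in gates:
        wires.append(OPS[op](wires[i], wires[j]))
    return wires[-1]

def find_consistent_circuit(samples, n, max_gates):
    """Search every circuit with at most max_gates gates for one that is
    consistent with all (input, output) samples. Exponential in max_gates."""
    for num_gates in range(1, max_gates + 1):
        wire_count = n + num_gates
        gate_choices = [
            (op, i, j)
            for op in OPS
            for i in range(wire_count)
            for j in range(wire_count)
        ]
        for gates in product(gate_choices, repeat=num_gates):
            # Gate k may only read wires computed before it.
            if any(i >= n + k or j >= n + k for k, (_, i, j) in enumerate(gates)):
                continue
            if all(eval_circuit(gates, x) == y for x, y in samples):
                return gates
    return None

# Example: recover a 2-input AND from its full truth table.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(find_consistent_circuit(samples, n=2, max_gates=2))
```

Even for this toy gate set, the number of candidate programs grows like (3·w²)^g for g gates over w wires, which is the brute-force blow-up referred to above.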
I do agree with the conclusion, though, that the problem being discussed here is very different from what is captured by statements like P≠NP.
[1] The statement P≠BPP actually means “randomized polynomial-time algorithms cannot be ‘derandomized’ by replacing them with equivalent deterministic polynomial-time algorithms”.
Thank you for the nit; of course you’re correct about the NP analogue of BPP. I’ll edit.
And also thanks for operationalizing the generalized circuit “inversion problem” that’s analogous to the parity game described. This is exactly what I meant, but you put it better than I could have.
I think that if you restrict your class of circuits C to have some special form (i.e., “choose your architecture”) and also choose some specific collection of test inputs x_1, .., x_N, then you can make the code of C depend on a secret boolean string, and have C(x_1), .., C(x_N) return specific SAT instances of this boolean string; so a sufficiently “fiddly” version of this circuit reconstruction problem is equivalent to P vs. NP. However, I think you’re right that hardness of the most natural version (in particular, where the inputs x_i are chosen at random) is probably closer to the existence of one-way functions than to P vs. NP.
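To make the construction concrete, here is a toy Python rendition of what I mean, with a hypothetical 3-CNF PHI standing in for the SAT instance; the encoding of test inputs as clause indices and all the names are my own illustrative choices, not a claim about the exact reduction.

```python
# Toy version of the "fiddly" restricted-architecture construction:
# fix a 3-CNF formula PHI; the architecture is the family of circuits
# C_s whose secret parameter is an assignment s, and whose output on
# test input i is the value of PHI's i-th clause at s. Illustrative only.

# A clause is a list of literals; +v means variable v, -v its negation.
PHI = [[+1, -2, +3], [-1, +2, +4], [+2, -3, -4]]  # hypothetical 3-CNF

def clause_value(clause, s):
    """Evaluate one clause at assignment s (s[v-1] is the bit of variable v)."""
    return int(any(s[abs(l) - 1] == (1 if l > 0 else 0) for l in clause))

def C(s, i):
    """A circuit from the restricted family: on test input i, output clause i of PHI at s."""
    return clause_value(PHI[i], s)

# What the learner sees: test inputs 0..N-1, each with output 1,
# i.e. the claim that the hidden s satisfies every clause of PHI.
samples = [(i, 1) for i in range(len(PHI))]

# Exhibiting any circuit from this family consistent with the samples
# is exactly exhibiting a satisfying assignment of PHI.
s = (1, 1, 0, 0)  # one satisfying assignment, hard-coded for illustration
assert all(C(s, i) == y for i, y in samples)
```

So within this restricted architecture, reconstructing the hidden circuit from the samples is exactly as hard as solving the planted SAT instance, which is what I mean by the “fiddly” version being equivalent to P vs. NP.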
Confirming that efficiently finding a small circuit (you don’t actually need any restriction beyond size) based on its values on a fixed collection of test inputs is known to imply NP⊆BPP --- see this paper.