How could “true randomness” be required, given that it’s computationally indistinguishable from pseudorandomness?
If there is a feasible pseudorandomness generator that is computationally indistinguishable from randomness, then randomness is indeed not necessary. However, the existence of such a pseudorandomness generator is still an open problem.
What? No it’s not. There are no pseudorandom generators truly, ultimately indistinguishable in principle from the ‘branch both ways’ operation in quantum mechanics; the computations all have much lower Kolmogorov complexity after running for a while. There are plenty of cryptographically strong pseudorandom number generators which could serve any role a cognitive algorithm could possibly demand for a source of bits guaranteed not to be expectedly correlated with other bits playing some functional role, especially if we add entropy from a classical thermal noise source, oracular knowledge of which would violate the second law of thermodynamics. This is not an open problem. There is nothing left to be confused about.
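To make the “cryptographically strong generator plus thermal noise” picture concrete, here is a minimal Python sketch of hash-based seed expansion. The construction (SHA-256 over seed || counter) is an illustrative toy in the spirit of standard counter-mode constructions, not a vetted DRBG; in practice you would use `os.urandom` or the `secrets` module.

```python
import hashlib

def prg(seed: bytes, nbytes: int) -> bytes:
    """Expand a short secret seed into a longer pseudorandom stream by
    hashing seed || counter (a toy counter-mode sketch, not a
    standards-compliant DRBG)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

stream = prg(b"short secret seed", 1024)
```

Mixing in a thermal-noise sample then amounts to hashing it into the seed before expansion.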
A proof that any generator was indistinguishable from random, given the usual definitions, would basically be a proof that P != NP, so it is an open problem. However, we’re pretty confident in practice that we have strong generators.
As a pedantic note, if you want to derandomize algorithms it is necessary (and sufficient) to assume that E is not contained in P/poly, i.e. that polynomial-size circuits cannot compute all functions computable in exponential time. This is much weaker than P != NP, and is consistent with e.g. P = PSPACE. You don’t have to be able to fool an adversary to fool yourself.
This is sometimes sloganized as “randomness never helps unless non-uniformity always helps,” since it is obvious that P << E and generally believed that P/poly is about as strong as P for “uniform” problems. It would be a big shock if P/poly were so much bigger than P.
But of course, in the worlds where you can’t derandomize algorithms in the complexity-theoretic sense, you can still look up at the sky and use the whole universe to get your randomness. What this means is that you can exploit much of the stuff going on in the universe to do useful computation without lifting a finger, and since the universe is astronomically larger than the problems we care about, this is normally good enough. General derandomization is extremely interesting and important as a conceptual framework in complexity theory, but useless for actually computing things.
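On the “look up at the sky” point: raw physical noise is biased and correlated, so in practice you would run it through an extractor first. The classic von Neumann trick is a short sketch (assuming independent, identically biased flips, which real sources only approximate):

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: over consecutive pairs, (0,1) -> 0 and
    (1,0) -> 1; equal pairs (0,0)/(1,1) are discarded. Output bits are
    unbiased provided the input flips are independent."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased but independent source still yields fair bits.
raw = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0]
print(von_neumann_extract(raw))  # -> [1, 0, 1]
```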
Are you referring to this result? Doesn’t seem to be identical to what you said, but very close.
Yeah, I was using “derandomize” slightly sloppily (to refer to a 2^(n^epsilon) slowdown rather than a poly slowdown). The result you cite is one of the main ones in this direction, but there are others (I think you can find most of them by googling “hardness vs. randomness”).
If poly-size circuits can’t compute E, we can derandomize poly-time algorithms in 2^(m^c) time for any c > 0, and if 2^(epsilon*m)-size circuits can’t compute E for some epsilon > 0, we can derandomize in poly time. Naturally there are other intermediate tradeoffs, but you can’t quite get BPP = P merely from E not having poly-size circuits.
Can you refer me to somewhere to read more about the “usual definitions” that would make this true? If I know the Turing machine, I can compare the output to that Turing machine and be pretty sure it’s not random after running the generator for a while. Or if the definition is just lack of expected correlation with bits playing a functional role, then that’s easy to get. What’s intermediate, such that ‘indistinguishable’ randomness means P != NP?
You don’t sound like you’re now much less confident you’re right about this, and I’m a bit surprised by that!
I got the ladder down so I could get down my copy of Goldreich’s “Foundations of Cryptography”, but I don’t quite feel like typing chunks out from it. Briefly, a pseudorandom generator is an efficient algorithm that stretches a short secret seed into a longer string of pseudorandom bits. It’s secure if every probabilistic polynomial-time distinguisher’s advantage shrinks faster than the reciprocal of any polynomial in the seed length. Pseudorandom generators exist iff one-way functions exist, and if one-way functions exist then P != NP.
If you’re not familiar with PRGs, distinguishers, advantage, negligible functions etc I’d be happy to Skype you and give you a brief intro to these things.
There are also intros available for free on Oded Goldreich’s FoC website.
Here’s my simplified intuitive explanation for people not interested in learning about these technical concepts. (Although of course they should!) Suppose you’re playing rock-paper-scissors with someone, you’re using a pseudorandom number generator to pick your moves, and P = NP. Then your opponent could do the equivalent of trying all possible seeds to see which one reproduces your pattern of play, and then use that to beat you every time.
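Here is a toy Python version of that attack. A deliberately breakable 16-bit generator stands in for the PRNG (the constants are arbitrary illustrative choices); the exhaustive seed search below is exactly the step that becomes feasible with P = NP-style power, and infeasible against a real CSPRNG with a large seed:

```python
MOVES = ["rock", "paper", "scissors"]
M = 2 ** 16  # toy 16-bit seed space, small enough to search exhaustively

def lcg_moves(seed, n):
    """n rock-paper-scissors moves driven by a small linear
    congruential generator (deliberately breakable)."""
    out, x = [], seed
    for _ in range(n):
        x = (75 * x + 74) % M
        out.append(MOVES[x % 3])
    return out

def candidate_seeds(observed):
    """The opponent's attack: try every seed and keep the ones that
    replay the observed moves exactly."""
    return [s for s in range(M) if lcg_moves(s, len(observed)) == observed]

secret = 12345
observed = lcg_moves(secret, 20)   # twenty rounds the opponent watched
seeds = candidate_seeds(observed)  # true seed is always among these
# Almost always only the true seed survives, so the opponent can
# predict (and counter) move 21 before it is played.
predictions = {lcg_moves(s, 21)[-1] for s in seeds}
```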
In non-adversarial situations (which may be what Eliezer had in mind) you’d have to be pretty unlucky if your cognitive algorithm or environment happens to serve as a distinguisher for your pseudorandom generator, even if it’s technically distinguishable.
Okay, makes sense if you define “distinguishable from random” as “decodable with an amount of computation polynomial in the randseed size”.
EDIT: Confidence is about standard cryptographically strong randomness plus thermal noise being sufficient to prevent expected correlation with bits playing a functional role, which is all that could possibly be relevant to cognition.
Decoding isn’t the challenge; the challenge is to guess whether you’re seeing the output of the PRG or truly random output. Your “advantage” is
Adv_PRG[Distinguisher] = P(Distinguisher[PRG[seed]] = “PRG”) - P(Distinguisher[True randomness] = “PRG”)
Note that this is standard notation when one discusses pseudorandom generators. Hence Ciphergoth’s comment about “the usual definitions.”
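For concreteness, that advantage can be computed exactly for a made-up toy generator (8-bit seed, 16-bit output; the quadratic map is an arbitrary illustration). It also shows why the definition must restrict to efficient distinguishers: a brute-force distinguisher that enumerates the generator’s whole image achieves advantage near 1 against any length-extending PRG.

```python
def toy_prg(seed):
    """Arbitrary toy generator: 8-bit seed -> 16-bit output."""
    return (seed * seed + 1) % (1 << 16)

# Enumerating the image takes time exponential in the seed length --
# exactly the kind of distinguisher the definition rules out.
image = {toy_prg(s) for s in range(256)}

def distinguisher(x):
    """Answer "PRG" iff the string lies in the generator's tiny image."""
    return x in image

# Adv = P(D(PRG(seed)) = "PRG") - P(D(uniform) = "PRG"), computed exactly:
p_prg = sum(distinguisher(toy_prg(s)) for s in range(256)) / 256  # = 1.0
p_uniform = len(image) / (1 << 16)  # = 256/65536
adv = p_prg - p_uniform             # = 0.99609375
```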
(Nods.)
For it to be an open problem, there would have to be no proof either way. Since Eliezer is claiming (or at least implying) that there is a proof that no PRNG is indistinguishable, arguing that there is no proof that some PRNG is indistinguishable doesn’t show that it is an open problem.
Quite. They seem to be agreeing that any PRNG can in principle be distinguished, and then Eliezer goes on to say that a mind is a place that will not be able to make that distinction, which ciphergoth didn’t begin to address.
You missed the key word “computationally”. Of course a pseudorandom generator is a mathematically distinct object, but not in a way that the universe is capable of knowing about (at least assuming that there are cryptographic pseudorandom generators that are secure against quantum adversaries, which I think most people believe).