Yeah, I was using “derandomize” slightly sloppily (to refer to a 2^(n^epsilon) slowdown rather than a poly slowdown). The result you cite is one of the main ones in this direction, but there are others (I think you can find most of them by googling “hardness vs. randomness”).
If poly-size circuits can’t compute E, we can derandomize poly-time algorithms with 2^(m^c) overhead for any c > 0, and if circuits of size 2^(cm) can’t compute E for some c > 0, we can derandomize in poly time. Naturally there are intermediate tradeoffs (e.g. hardness against 2^(m^c)-size circuits gives quasipolynomial-time derandomization), but you can’t quite get BPP = P from E ⊄ P/poly alone; a sketch of the tradeoff table is below.
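To spell that out, here’s a sketch of the standard tradeoff table, paraphrased from the usual hardness-vs-randomness results (Babai–Fortnow–Nisan–Wigderson, Nisan–Wigderson, Impagliazzo–Wigderson); I’m glossing over infinitely-often vs. almost-everywhere hardness and the exact constants. Here m is the circuit input length, n the input length of the BPP algorithm, and SIZE(s(m)) the class of languages decided by circuit families of size O(s(m)).

```latex
% Hardness vs. randomness tradeoffs, paraphrased; see BFNW '93,
% NW '94, and IW '97 for the precise statements.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{align*}
  % No polynomial circuit upper bound for E => subexponential derandomization
  \mathsf{E} \not\subseteq \mathsf{SIZE}(\mathrm{poly}(m))
    &\implies \mathsf{BPP} \subseteq \bigcap_{c>0} \mathsf{DTIME}\!\left(2^{n^{c}}\right) \\
  % Subexponential hardness => quasipolynomial-time derandomization
  \exists\,\delta>0:\ \mathsf{E} \not\subseteq \mathsf{SIZE}\!\left(2^{m^{\delta}}\right)
    &\implies \mathsf{BPP} \subseteq \mathsf{DTIME}\!\left(2^{\operatorname{polylog} n}\right) \\
  % Exponential hardness => full derandomization (Impagliazzo--Wigderson)
  \exists\,\varepsilon>0:\ \mathsf{E} \not\subseteq \mathsf{SIZE}\!\left(2^{\varepsilon m}\right)
    &\implies \mathsf{BPP} = \mathsf{P}
\end{align*}
\end{document}
```

The top row is the sense in which E ⊄ P/poly “derandomizes” BPP with a 2^(n^c) slowdown; only the exponential lower bound in the bottom row gets you all the way to BPP = P.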