It would be weird if the approximation to the idealized algorithm knew logical facts you didn't. Perhaps it's hard to update on those logical facts directly if the algorithm is opaque enough, but there is apparently some reason to believe that the opaque algorithm approximates the O(2^n) algorithm, and I suspect that this epistemic state lets one learn the logical facts that the approximation knows.
That seems likely. Of course, learning those logical facts might take similarly unreasonable time.
Considering this has given me the intuition that, while pulling the information out into the overall inductor is probably possible, doing so will conflict with the goal of making a variant inductor that runs efficiently. This might be avoidable, but my intuition gestures vaguely in the direction of P vs. NP and EXPTIME vs. NEXPTIME as reasons to expect that it is not.