I can see an interpretation of “idealized agent” under which it would make sense to model an algorithm you don’t fully understand as a presumed-hostile agent acting on logical information you do not know. Say, because the idealized agent is bounded and would take O(2^n) time to solve a problem, and the partially-understood algorithm approximates it, with small but unknown bias, in O(n^2) time.
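To make that setup concrete, here is a toy instantiation (MAX-CUT is just a stand-in I picked, nothing canonical): brute force plays the role of the O(2^n) idealized computation, and a one-pass greedy heuristic plays the role of the O(n^2) approximation whose bias on any particular instance is unknown without paying for the exact run.

```python
import itertools

def exact_max_cut(adj):
    """The 'idealized agent': check all 2^n partitions. Exponential time."""
    n = len(adj)
    best = 0
    for bits in itertools.product((0, 1), repeat=n):
        cut = sum(adj[i][j]
                  for i in range(n) for j in range(i + 1, n)
                  if bits[i] != bits[j])
        best = max(best, cut)
    return best

def greedy_max_cut(adj):
    """The opaque approximation: one greedy pass, O(n^2). It cuts at least
    half of all edges, so its output g satisfies g <= OPT <= 2*g, but its
    exact bias on a given instance is unknown without the O(2^n) run."""
    n = len(adj)
    side = [0] * n
    for v in range(1, n):
        # Place v on whichever side cuts more of its edges to assigned vertices.
        cut_if_0 = sum(adj[v][u] for u in range(v) if side[u] == 1)
        cut_if_1 = sum(adj[v][u] for u in range(v) if side[u] == 0)
        side[v] = 1 if cut_if_1 > cut_if_0 else 0
    return sum(adj[i][j]
               for i in range(n) for j in range(i + 1, n)
               if side[i] != side[j])
```

On small graphs you can run both and see the gap directly; beyond n ≈ 25 or so, only the greedy answer remains affordable, which is exactly the asymmetry that makes the opaque algorithm worth modeling at all.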
It would be weird if the approximation to the idealized algorithm knew logical facts you didn’t. Perhaps it’s hard to update on the logical facts directly if it’s an opaque-enough algorithm, but there is apparently some reason to believe that the opaque algorithm approximates the O(2^n) algorithm, and I suspect that this epistemic state allows one to learn the logical facts that the approximation knows.
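One hedged sketch of what that learning could look like in the toy example above: if you believe the opaque algorithm is a 1/2-approximation, then merely observing its cheap output pins the true optimum, a logical fact you couldn’t afford to compute directly, into an interval.

```python
def opt_interval(greedy_cut):
    """Given the opaque algorithm's output g, plus the *belief* that it is a
    1/2-approximation (g <= OPT <= 2*g), infer an interval containing the
    true optimum. Observing g costs O(n^2); narrowing the interval down to
    a point still costs O(2^n) in the worst case."""
    return (greedy_cut, 2 * greedy_cut)

# e.g., with greedy_max_cut from the sketch above:
#   lo, hi = opt_interval(greedy_max_cut(adj))
```

The coarse fact comes cheap; recovering everything the approximation “knows” is the part that can cost as much as the exact computation.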
That seems likely. Of course, learning those logical facts might take similarly unreasonable time.
Considering this has given me the intuition that, while pulling the information out into the overall inductor is probably possible, it will conflict with the goal of making a variant inductor that runs efficiently. This might be avoidable, but my intuition gestures vaguely in the direction of P vs. NP and EXPTIME vs. NEXPTIME for why it is likely not to be.