I agree.
But if you’re saying that it’s proven that “you have to …”, I wasn’t aware of that.
I don’t have a proof of that claim, either, just a strong intuition. I should have specified that more clearly. If some other LWer has a proof in mind, I’d love to see it.
Here are some related things I do have proofs for.
There’s no general procedure for deciding whether two programs compute the same function—this follows from Rice’s theorem. (In fact, equivalence is undecidable even for context-free grammars, let alone more general programs.)
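To make that concrete, here’s a minimal Python sketch of the standard reduction, assuming (impossibly) that we had a decider programs_equivalent(p, q); every name in it is hypothetical, not a real API:

```python
# Hypothetical sketch: none of these functions exist as real APIs.
# If we had a decider programs_equivalent(p, q) that returned True exactly
# when p and q compute the same function, we could decide halting.

def would_decide_halting(f, i, programs_equivalent):
    """Returns True iff f(i) halts -- impossible in general, so no such
    equivalence decider can exist."""

    def P(x):
        f(i)        # ignores x; this line runs forever iff f(i) never halts
        return 0    # otherwise P computes the constant-0 function

    def constant_zero(x):
        return 0

    # P and constant_zero compute the same function exactly when f halts on i.
    return programs_equivalent(P, constant_zero)
```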
For any given problem, there is some set of asymptotically optimal algorithms, in terms of space or time complexity. And a simulator can’t improve on that bound by more than a constant factor. So if you have an optimal algorithm for something, no AI can improve on it asymptotically.
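For a familiar example (an illustration on my part, nothing special about this problem): binary search uses O(log n) comparisons on a sorted array, which matches the lower bound for comparison-based search, so no rewrite, AI-generated or otherwise, can beat it by more than a constant factor in that model.

```python
# Binary search: O(log n) comparisons on a sorted list, matching the
# information-theoretic lower bound for comparison-based search. Any
# "smarter" rewrite can only shave constant factors in this model.

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1
        elif sorted_items[mid] > target:
            hi = mid
        else:
            return mid      # index of target
    return -1               # not present

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```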
Now suppose I give you an arbitrary program and promise that there’s a more efficient program that produces the same output. There can’t be a general way to find the optimal program that produces the answer.*
* Proof by reduction from the halting problem: Suppose we had an oracle for minimizing programs. For an arbitrary Turing machine T and input I, create a program P that ignores its input and simulates T on I. P is equivalent to a trivial constant-time program exactly when T halts on I, so if we could minimize P, we’d have a halting oracle.
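Here’s that construction as a hedged Python sketch; find_optimal_equivalent stands in for the hypothetical minimizing oracle and obviously doesn’t exist:

```python
# Hypothetical sketch of the footnote's reduction; find_optimal_equivalent
# is the assumed minimizing oracle and does not actually exist.

def build_P(f, i):
    """The program P from the proof: ignores its input, simulates f on i."""
    def P(x):
        f(i)        # loops forever iff f(i) never halts
        return 0
    return P

# If f halts on i, P computes a constant function, and its optimal equivalent
# runs in constant time (essentially `lambda x: 0`); if f never halts on i,
# no program equivalent to P halts on any input. So an oracle that could
# always hand back the optimal equivalent program would let us answer whether
# f halts on i -- i.e., it would be a halting oracle, which cannot exist.
# P_opt = find_optimal_equivalent(build_P(f, i))   # impossible in general
```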
When you say “optimal”, you mean space or time used?
When you say “minimize” you mean “shortest equivalent program”?
I love this list of undecidable problems.
I meant to minimize in terms of asymptotic time or space complexity, not length-of-program.