This doesn’t take into account logical uncertainty. It’s easy to write a program that eventually computes the answer you want, and then to pose the question of doing that more efficiently while provably retaining the same goal; that is essentially what you cited, with respect to a brute-force classical inference system that starts from ZFC and enumerates all theorems (and even this has its problems, as you know, since the agent could be controlling which answer is correct). A far more interesting question is which answer to name when you don’t have time to find the correct one. “Correct” is merely a heuristic for when you have enough time to reflect on what to do.
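For concreteness, here is a minimal sketch (in Python) of the kind of brute-force system I mean; `verifies` is a hypothetical stand-in for a real ZFC proof checker, not something implemented here:

```python
from itertools import count, product

def brute_force_answer(statement, verifies):
    """Enumerate every string in length order and return the first one
    that `verifies` accepts as a proof settling `statement`.

    `verifies(candidate, statement)` is a hypothetical stand-in for a
    real ZFC proof checker. If a proof exists, this halts -- but in
    time exponential in the length of the shortest proof, which is the
    point: correctness in the limit says nothing about what to answer
    under a deadline.
    """
    alphabet = "()->,=ETAv&x01 "  # some fixed finite proof alphabet
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if verifies(candidate, statement):
                return candidate
```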
(Also, even to prove theorems you need operating hardware, and managing that hardware, along with other actions in the world, would require decision-making under (logical) uncertainty. Even nontrivial self-optimization would require decision-making under uncertainty that has a “chance” of turning you away from the correct question.)
What’s more interesting about it? Think for some time and then output the best answer you’ve got.
Try to formalize this intuition. With provably correct answers, that’s easy. Here, you need a notion of “best answer I’ve got”, a way of comparing possible answers when correctness remains inaccessible. This is what makes it “more interesting”: the first problem is solved (to an extent), while this one isn’t.
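A toy sketch of the shape such a notion would have to take (everything named here is hypothetical: `propose` streams candidate answers, and `score` is the heuristic ranking whose formalization is the open problem):

```python
import time

def anytime_answer(propose, score, budget_seconds):
    """Generate candidates until the time budget runs out, then return
    the best one found so far under the heuristic `score`.

    Both arguments are hypothetical stand-ins: `propose()` streams
    candidate answers (possibly forever), and `score(candidate)` ranks
    them without being able to certify correctness.
    """
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for candidate in propose():
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
        if time.monotonic() >= deadline:
            break
    return best
```

The scaffolding is trivial; all of the difficulty hides in `score`, a comparison of answers that deserves to be called “best answer I’ve got” without ever certifying correctness.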