I enjoyed reading and thinking about this, but now I wonder if it’s less interesting than it first appears. Tell me if I’m missing the point:
Since world is a function of the integer agent outputs, not of agent itself, and since agent knows world’s source code, agent could do the following: instead of searching for proofs shorter than some fixed length that world outputs u if agent outputs a, it could just simulate world and tabulate the values world(1), world(2), world(3), …, up to some fixed world(n). Then output the a ≤ n for which world(a) is largest.
In other words, agent can just ask world what’s the best it can get, then do that.
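A minimal sketch of this brute-force tabulation, with a toy stand-in for world (the quadratic is purely illustrative; any pure function of the output integer would do):

```python
def world(a):
    # Toy stand-in for the world program: by assumption it depends
    # only on the integer a that the agent outputs. (Illustrative.)
    return -(a - 3) ** 2

def agent(n):
    # Simulate world on each candidate output 1..n and output
    # the a for which world(a) is largest.
    return max(range(1, n + 1), key=world)

print(agent(10))  # -> 3
```

Here agent never proves anything about world; it just runs it on every candidate output and picks the best.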
Granted, there are situations in which it takes longer to compute world(n) than to prove that world(n) takes a certain value, but the reverse is also true (and, I think, more likely: the situations I can think of in which cousin_it’s algorithm beats mine, resource-wise, are contrived). But cousin_it is considering an idealized problem in which resources don’t matter anyway.
Edit: If I’m wrong about this, I think it will be because the fact that world depends only on agent’s output is somehow not obvious to agent. But I can’t figure out how to state this carefully.
You’re not completely wrong, but the devil is in the details. The world’s dependence on the agent’s output may indeed be non-obvious, as in Nesov’s comment.
Are you saying there’s a devil in my details that makes it wrong? I don’t think so. Can you tell me a tricky thing that world can do that makes my code for agent worse than yours?
About “not obvious how world depends on agent’s output,” here’s an only very slightly trickier thing agent can do, which is still not as tricky as searching over all proofs of length less than n. It can write a program agent-1 (is this what Nesov is getting at?) that always outputs 1, then compute world(agent-1). It can next write a program agent-2 that always outputs 2, then compute world(agent-2). And so on, up to agent-n. Then have agent output the k for which world(agent-k) is largest. Since world does not depend on any agent’s source code, if agent outputs k then world(agent) = world(agent-k).
Again this is just a careful way of saying “ask world what’s the best agent can get, and do that.”
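A sketch of this variant, under the assumption that world is handed an agent program but, per the setup, depends only on the integer that program returns (the scoring function is again a toy placeholder):

```python
def world(agent_program):
    # Toy world: it runs the agent program it is given, but its value
    # depends only on the integer returned, not on the program's source.
    a = agent_program()
    return -(a - 3) ** 2

def make_constant_agent(k):
    # agent-k: a program that ignores everything and always outputs k.
    return lambda: k

def agent(n):
    # Evaluate world on each constant program agent-1 ... agent-n,
    # then output the k whose constant program scores highest.
    # Since world ignores source code, world(agent) = world(agent-k).
    return max(range(1, n + 1),
               key=lambda k: world(make_constant_agent(k)))

print(agent(10))  # -> 3
```

The constant programs act as probes: agent asks world what each possible output would be worth, then commits to the best one.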