Yes. This solution is only “optimal” if world() depends only on the return value of agent(). If world() inspects the source code of agent(), or measures cycles, or anything like that, all bets are off—it becomes obviously impossible to write an agent() that works well for every possible world(), because world() could just special-case and penalize your solution.
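The distinction can be sketched in Python. This is only an illustration with made-up function names; `world_inspects_internals` stands in for any world() that looks at more than the return value (source code, cycle counts, etc.):

```python
def agent():
    # Deliberates, then returns an action.
    return 42

def world_output_only(agent):
    # world() depends only on agent()'s return value:
    # the case where the solution is "optimal".
    return 100 if agent() == 42 else 0

def world_inspects_internals(agent):
    # world() examines agent's code object rather than just
    # its output, so it can special-case and penalize this
    # particular implementation. No single agent() can win
    # against every world() of this kind.
    if 42 in agent.__code__.co_consts:
        return 0
    return 100

print(world_output_only(agent))         # 100
print(world_inspects_internals(agent))  # 0
```

The same agent gets the maximum payoff from the first world and the minimum from the second, even though its output never changed.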
Edit: You weren’t wrong! You identified an important issue that wasn’t clear enough in the post. This is the right kind of discussion we should be having: in what ways can we relax the restrictions on world() and still hope to have a general solution?
Am I right that if world’s output depends on the length of time agent spends thinking, then this solution breaks?
Edit: I guess “time spent thinking” is not a function of “agent”, and so world(agent) cannot depend on time spent thinking. Wrong?
Edit 2: Wrong per cousin’s comment. World does not even depend on agent, only on agent’s output.
One can interpret the phrase “world calls agent and returns utility” with different levels of obtuseness:

1. World looks up agent, examines it, runs it, sees what its output and intermediate steps were, then decides what agent deserves.
2. World looks at a sheet of paper agent has written a number on. Analyzes handwriting. Then decides what agent deserves.
3. World does not even analyze handwriting.
You mean 3, right? That’s all I meant by edit 2.
Yes, that’s right.