If agent() is actually agent(‘source of world’), as the classical Newcomb problem has it, I fail to see what is wrong with simply enumerating the possible actions, simulating ‘source of world’ with the call of agent(‘source of world’) replaced by the current action candidate as a constant, and then returning the action with maximum payoff.
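A minimal sketch of what I mean, assuming the world source is a Python program whose payoff is world()’s return value and, for simplicity, that the agent call appears literally as agent(world_src) in that source (the action set and names are illustrative, not from the post):

    # Brute-force proposal: substitute each candidate action, as a constant,
    # for the agent call inside the world source, run the modified world,
    # and keep the action with the best payoff.
    ACTIONS = ("one_box", "two_box")   # illustrative action set for Newcomb

    def agent(world_src):
        best_action, best_payoff = None, float("-inf")
        for action in ACTIONS:
            # Replace the self-referential call with the constant candidate.
            candidate_src = world_src.replace("agent(world_src)", repr(action))
            namespace = {}
            exec(candidate_src, namespace)   # defines world() in namespace
            payoff = namespace["world"]()    # run the modified world
            if payoff > best_payoff:
                best_action, best_payoff = action, payoff
        return best_action

On the usual world-program rendering of Newcomb this picks one-boxing, since the substituted constant stands in for both the prediction and the actual choice.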
See world2(). Also, the agent takes no parameters; it just knows the world program it’s working with.
The only difference I can see between “an agent which knows the world program it’s working with” and “agent(‘source of world’)” is that the latter agent can be more general.
A prior distribution over possible states of the world, which is what you’d want to pass in outside of toy-universe examples, is rather clearly part of the agent rather than a parameter.
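To make that concrete, a hedged sketch of my own (the candidate worlds, probabilities, and the ACTION substitution are all made up for illustration): the prior over candidate world programs is data inside the agent, which takes no parameters and maximizes expected payoff under that prior.

    # Sketch: the prior over candidate world programs is part of the agent,
    # not an argument. World sources and probabilities are illustrative only;
    # ACTION stands in for substituting a constant for the agent's call.
    CANDIDATE_WORLDS = (
        (0.9, "def world():\n    return 10 if ACTION == 'a' else 0\n"),
        (0.1, "def world():\n    return 0 if ACTION == 'a' else 100\n"),
    )
    ACTIONS = ("a", "b")

    def agent():
        best_action, best_eu = None, float("-inf")
        for action in ACTIONS:
            expected = 0.0
            for prob, world_src in CANDIDATE_WORLDS:
                namespace = {"ACTION": action}
                exec(world_src, namespace)   # defines world() for this candidate
                expected += prob * namespace["world"]()
            if expected > best_eu:
                best_action, best_eu = action, expected
        return best_action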
Yes, in a sense. (Although technically, the agent could know facts about the world program that can’t be inferred algorithmically, or before a timeout, just from the program itself, and ditto for the agent’s own program, but that’s a fine point.)