You don’t necessarily need “explicit self-reference”. The difference in utility functions can also arise from a difference in the agents’ locations in the universe. Two identical worms placed in different locations will have different utility functions, because their atoms are not in exactly the same places, despite neither having explicit self-reference. Similarly, in a computer simulation, agents with the same source code will be called by the universe-program in different contexts (if they weren’t, I don’t see how it would even make sense to speak of them as “different instances of the same source code”; there would just be one instance).
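To make this concrete, here is a minimal sketch of the simulation case (the `Agent` class, the `position` parameter, and the toy `world` are my own illustrative assumptions, not anything from the post): two instances of identical source code, instantiated by the universe-program in different contexts, end up with different utility functions over the very same world state.

```python
# Toy sketch: identical source code, different calling contexts.
# All names here are hypothetical illustrations.

class Agent:
    def __init__(self, position: int):
        # The only difference between instances is the context
        # in which the universe-program instantiated them.
        self.position = position

    def utility(self, world: dict[int, float]) -> float:
        # Each instance values resources at its own location,
        # so identical code induces location-indexed preferences.
        return world.get(self.position, 0.0)

world = {0: 1.0, 5: 0.0}            # resources at location 0, none at 5
a, b = Agent(position=0), Agent(position=5)

print(a.utility(world))  # 1.0 -- this world state is good for instance a
print(b.utility(world))  # 0.0 -- the same world state is bad for instance b
```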
So I think this is probably a property of almost all possible agents: it seems you would need a very complex and specific ontological model inside the agent to prevent these effects and give the two agents the same utility function.
Using the definitions from the post, those agents would be optimising the same utility functions, just by taking different actions.