Yes, agents with different preferences are incentivised to cooperate provided that the cost of enforcing cooperation is less than the cost of conflict. Agreeing to adopt a shared utility function via acausal trade could be a very cheap way to enforce cooperation, and some agents might do this purely on the basis of their prior. However, this holds for any agents with different preferences, not just agents of the type I described. You could use the same argument to claim that, in general, you are unlikely to find two very intelligent agents with different utility functions.
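As a toy illustration (the numbers and the `expected_value` helper below are hypothetical, invented just for this sketch), the incentive comes straight from comparing the one-off cost of enforcement against the expected cost of conflict:

```python
# Toy payoff comparison; all figures are made up for illustration.

def expected_value(option: str) -> float:
    """Stylised expected value for one agent under each option."""
    baseline = 100.0            # value the agent expects absent any interaction
    if option == "conflict":
        return baseline - 10.0  # resources burned fighting the other agent
    if option == "merge":
        return baseline - 1.0   # one-off cost of agreeing to a shared utility function
    raise ValueError(option)

# Whenever enforcement is cheaper than conflict, merging wins regardless of
# how different the two agents' terminal preferences are.
assert expected_value("merge") > expected_value("conflict")
```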
Agents with identical source code will reason identically before seeing any observations, so the “acausal trade” in this case barely feels like trade at all; it amounts to making your preferences updateless over possible future observations. That’s much simpler than acausal trade between agents with different source code, which we can’t even formalize yet.
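A minimal sketch of why the identical-source-code case is easy (my own illustration; the `policy` function is hypothetical): each copy can condition on whether its counterpart's source is byte-for-byte identical to its own, and since both copies compute the same function, the cooperative outcome follows without any bargaining over differing utility functions:

```python
import inspect

def policy(own_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent runs exactly this decision procedure.

    Identical source code guarantees both copies return the same answer,
    so mutual cooperation falls out without explicit negotiation.
    """
    return "cooperate" if opponent_source == own_source else "defect"

src = inspect.getsource(policy)
print(policy(src, src))          # identical copies -> "cooperate"
print(policy(src, src + "#x"))   # different source -> no guarantee; here "defect"
```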