6.7 Systems composed of rational agents need not maximize a utility function

> "There is no canonical way to aggregate utilities over agents, and game theory shows that interacting sets of rational agents need not achieve even Pareto optimality."
Is the underlined claim true? I know it's true if the agents follow CDT, but does it still hold if they follow FDT? (I think 'rational' should not be read as 'follows CDT', since CDT is strictly worse than FDT.)
Even with FDT, it's not clear that two FDT agents cooperate in a one-shot prisoner's dilemma; whether they do depends on their beliefs about each other, in particular on each agent's credence that the other is running the same (or a relevantly correlated) decision procedure. The sketch below makes this concrete.
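Here is a minimal sketch (illustrative, not from the source): a simplified FDT agent in a one-shot prisoner's dilemma, where the payoffs and the single correlation parameter `p` are assumptions made purely for the example.

```python
# A simplified FDT agent in a one-shot prisoner's dilemma. The payoffs and
# the correlation parameter p are illustrative assumptions, not from the source.

# Standard PD payoffs for (my_action, their_action), with T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def fdt_expected_value(my_action: str, p: float) -> float:
    """Expected payoff when I believe that with probability p the other agent
    runs my decision procedure (so its output matches mine), and that an
    uncorrelated opponent simply defects (a pessimistic simplification)."""
    correlated = PAYOFF[(my_action, my_action)]
    uncorrelated = PAYOFF[(my_action, "D")]
    return p * correlated + (1 - p) * uncorrelated

def fdt_decision(p: float) -> str:
    """Choose the action whose policy has the higher expected value."""
    return max("CD", key=lambda a: fdt_expected_value(a, p))

for p in (0.0, 0.5, 0.8, 1.0):
    print(f"p = {p}: play {fdt_decision(p)}")  # D, D, C, C
```

With these payoffs the agent cooperates only when p > (T - S) / ((T - S) + (R - P)) = 5/7, so even two genuine FDT agents defect if their mutual credence in being correlated is too low: cooperation here is a fact about their beliefs, not about FDT per se.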
Can you construct agents that are guaranteed to ‘achieve Pareto optimality’?
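Under strong assumptions, yes: if agents can inspect each other's source code, Tennenholtz's "program equilibrium" (Games and Economic Behavior, 2004) gives a construction in which mutual cooperation (a Pareto-optimal outcome) is an equilibrium of the one-shot prisoner's dilemma. A minimal sketch follows, with the function names and the brittle syntactic-equality test chosen purely for illustration:

```python
import inspect

def clique_bot(my_source: str, their_source: str) -> str:
    """Cooperate iff the opponent's program is character-for-character
    identical to mine; otherwise defect. This is the simplest (and most
    brittle) variant of Tennenholtz-style program equilibrium."""
    return "C" if their_source == my_source else "D"

src = inspect.getsource(clique_bot)

# Two copies cooperate: each sees its own source on the other side.
print(clique_bot(src, src))  # -> C

# Against a plain defector it defects, so it is never exploited.
print(clique_bot(src, "def defect_bot(*_):\n    return 'D'"))  # -> D
```

Neither copy can gain by deviating, since any change to its source breaks the equality test and triggers mutual defection, so (C, C) is an equilibrium. But this sidesteps, rather than answers, the question for agents that cannot verify each other's code.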