The first point that seems relatively obvious to me is that all rational agents will intentionally misstate their utility functions, exaggerating them toward extremes, for bargaining purposes.
Because we’re working in an idealised hypothetical, we could decree that they can’t do this (they must all wear their true utility functions on their sleeves). I don’t see a disadvantage to demanding this.
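To make the incentive concrete, here is a minimal sketch (my own illustration, not from the original text) using the Nash bargaining solution over splitting a unit pie: two agents with truly linear utilities split it 50/50, but if one agent falsely reports the more extreme, risk-seeking utility u(x) = x³, the bargaining solution shifts in their favour. The grid-search solver and utility functions are hypothetical choices for the demo.

```python
def nash_split(u1, u2, n=100001):
    """Find agent 1's share x in [0, 1] maximising the Nash
    product u1(x) * u2(1 - x), by simple grid search."""
    best_x, best_val = 0.0, float("-inf")
    for i in range(n):
        x = i / (n - 1)
        val = u1(x) * u2(1 - x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

# Both agents honestly report linear utility: even split.
honest = nash_split(lambda x: x, lambda x: x)

# Agent 1 exaggerates, claiming the extreme utility x**3:
# the solution moves to x = 3/4 in agent 1's favour.
exaggerated = nash_split(lambda x: x**3, lambda x: x)

print(round(honest, 3))       # ≈ 0.5
print(round(exaggerated, 3))  # ≈ 0.75
```

Since misreporting raises agent 1's true (linear) payoff from 0.5 to 0.75, exaggeration is individually rational, which is exactly why the idealised setup has to stipulate truthful reporting.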