modify their individual utility functions into some compromise utility function, in a mutually verifiable way, or equivalently to jointly construct a successor AI with the same compromise utility function and then hand over control of resources to the successor AI
This is precisely equivalent to Coasean efficiency, FWIW. Indeed, correspondence with some "compromise" welfare function is exactly what it means for an outcome to be efficient in this sense. Humans, and agents more generally, can certainly face obstacles to achieving this, so that they're limited to some constrained-efficient outcome: one that still maximizes some welfare function, but only after the unavoidable constraints are taken into account.
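In symbols, a sketch of the standard result (the "only if" direction needs the usual convexity assumptions):

$$x^* \text{ is efficient} \iff x^* \in \arg\max_{x \in X} \sum_{i=1}^{n} \lambda_i\, u_i(x) \quad \text{for some weights } \lambda_i \ge 0,\ \textstyle\sum_i \lambda_i = 1$$

where $u_1, \dots, u_n$ are the agents' utility functions, $X$ is the feasible set, and the weighted sum plays the role of the compromise welfare function. Constrained efficiency is the same statement with $X$ shrunk to whatever the obstacles leave feasible.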
(For instance, if the price of some commodity or service is bounded by an information problem, say buyers who can't verify quality, so that "cheap" versions of it predominate, then marginal rates of transformation won't necessarily be equalized across agents: agent A might put her endowment towards goal X, while agent B uses her own resources to pursue some goal Y. But that is a constraint that can in principle be well defined, namely as a transaction cost. Put all such constraints together and you can see how they determine what is lost to inefficiency: the "price of anarchy", so to speak.)
Strong upvote; very good to know.
I internalised the meaning of these variables only to find you didn't refer to them again. What was the point of this sentence?