Whenever you deviate from maximizing expected value (in contexts where this is possible), you can usually find examples where that behaviour looks incorrect. For example, you might leave yourself open to being value-pumped.
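(A minimal sketch of what I mean by value-pumping, using a toy agent with cyclic preferences I made up for illustration, not anything from the bargaining setup itself: the agent pays a small fee for each "upgrade", and after one full cycle it holds its original option but is strictly poorer.)

```python
# Hypothetical agent with cyclic preferences A < B < C < A.
# prefers[x] is the option it strictly prefers over x, and it will
# pay `fee` to trade up to it each time.
prefers = {"A": "B", "B": "C", "C": "A"}
fee = 1.0

holding, wealth = "A", 10.0
for _ in range(3):  # one full cycle of "upgrades"
    holding = prefers[holding]
    wealth -= fee

print(holding, wealth)  # back to "A", but 3.0 poorer than before
```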
Can a bargaining solution be value-pumped? My intuition says that if it can, then the delegates would choose a different solution. (This seems like an interesting question to look into in more detail, though.) But doesn’t your answer also argue against using the bargaining solution under moral uncertainty, and in favor of just sticking with expected utility maximization (and throwing away other, incompatible moral philosophies that might be value-pumped)?
For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely it matters what you do with it).
But what I do with negentropy largely depends on what I value, which I don’t know at this point...