> Whenever you deviate from maximizing expected value (in contexts where this is possible) you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.

> (And why do you find it odd, BTW?)
For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely, what matters is what you do with it). The question therefore looks like one of descriptive uncertainty, just as if you had asked about money: uncertainty about whether you value money according to some particular function is descriptive uncertainty on any plausible theory. Also, while evaluative uncertainty does arise in self-interested cases, this example is a strange case of self-interest for reasons others have pointed out.
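To make the quoted value-pump worry concrete, here is a minimal sketch (my illustration, not anything from the thread) of the classic money pump: an agent with cyclic preferences A > B > C > A will pay a small fee for each "upgrade" and can be walked around the cycle indefinitely.

```python
# Hypothetical cyclic preference relation: (x, y) in PREFERS means
# x is strictly preferred to y. A > B > C > A is intransitive.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def trade_up(holding: str, offered: str, wealth: float, fee: float = 1.0):
    """Accept the offered item (paying the fee) iff it is preferred."""
    if (offered, holding) in PREFERS:
        return offered, wealth - fee
    return holding, wealth

holding, wealth = "A", 10.0
for offered in ["C", "B", "A", "C", "B", "A"]:  # walk the cycle twice
    holding, wealth = trade_up(holding, offered, wealth)

print(holding, wealth)  # "A" 4.0 -- back where we started, 6 units poorer
```

The question in the next reply is essentially whether a bargaining solution over moral theories exposes a cycle like this one.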
> Whenever you deviate from maximizing expected value (in contexts where this is possible) you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.
Can a bargaining solution be value-pumped? My intuition says that if it can, the delegates would choose a different solution. (This seems like an interesting question to look into in more detail, though.) But doesn’t your answer also argue against using the bargaining solution under moral uncertainty, and in favor of just sticking with expected utility maximization (and throwing away other, incompatible moral philosophies that might be value-pumped)?
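For readers unfamiliar with the bargaining framing, here is a minimal sketch (my construction, with invented theories, policies, and utilities) of delegates for two moral theories picking a policy by the Nash bargaining solution: choose the option maximizing the product of each theory's utility gain over its disagreement point.

```python
policies = ["fund_x", "fund_y", "compromise"]

# Utility each theory assigns to each policy (hypothetical numbers).
utility = {
    "theory_1": {"fund_x": 10.0, "fund_y": 0.0, "compromise": 6.0},
    "theory_2": {"fund_x": 0.0, "fund_y": 10.0, "compromise": 7.0},
}

# Disagreement point: what each theory gets if the delegates fail to agree.
disagreement = {"theory_1": 1.0, "theory_2": 1.0}

def nash_product(policy: str) -> float:
    """Product of gains over the disagreement point across theories."""
    prod = 1.0
    for theory, u in utility.items():
        gain = u[policy] - disagreement[theory]
        if gain <= 0:  # a delegate vetoes anything worse than disagreement
            return float("-inf")
        prod *= gain
    return prod

best = max(policies, key=nash_product)
print(best)  # "compromise": (6-1)*(7-1) = 30; both one-sided options are vetoed
```

Whether such a solution is exploitable presumably depends on details like how the disagreement point is set, which is itself a contested choice rather than something this sketch settles.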
> For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely, what matters is what you do with it).
But what I do with negentropy largely depends on what I value, which I don’t know at this point...