Thanks for the clarifications. It looks like I might be more of a proponent for the bargaining approach than you and Nick are at this point.
We don’t consider the bargaining/voting/market approach to be a very plausible contender for a unique canonical answer; rather, we see it as an approach that at least gets the hard cases mostly right instead of remaining silent about them.
I think bargaining, or some of the ideas in bargaining theory (or improvements upon them), could be contenders for the canonical way of merging values (if not moral philosophies).
In the case you consider (which I find rather odd...) Nick and I would simply multiply it out.
Why? (And why do you find it odd, BTW?)
However, even if you looked at what our bargaining solution would do, it is not quite what you say.
I was implicitly assuming that this is the only decision (there are no future decisions), in which case the solution Nick described in his Overcoming Bias post does pick project B with certainty, I think. I know this glosses over some subtleties in your ideas, but my main goal was to highlight the difference between bargaining and linearly combining utility functions.
ETA: Also, if we make the probability of the sqrt utility function much smaller, like 10^-10, then the sqrt representative has very little chance of offering enough concessions on future decisions to get its way on this one, but it would still be the case that EU(A)>EU(B).
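To make the difference concrete, here is a toy version with made-up numbers (these are not the original example's figures, and I'm treating raw utility values as the common scale when combining hypotheses, which is itself a contestable normalization choice). The gamble A is preferred by the linear utility function and the sure thing B by the sqrt function; expected utility maximization picks A across a huge range of credences in the sqrt function, while a simple credence-weighted vote among delegates picks B as soon as the sqrt delegate has the majority. (The actual bargaining solution involves more than this one-shot vote, of course.)

```python
import math

# Made-up numbers, illustration only.  Raw utility values are used as the
# common scale across hypotheses, which is exactly what drives the result.

# Hypotheses about my utility in negentropy x, with credences:
p_sqrt = 0.99          # credence that u(x) = sqrt(x)
p_lin = 1 - p_sqrt     # credence that u(x) = x

# Each project is a lottery over negentropy: a list of (probability, amount).
A = [(0.5, 1e12), (0.5, 0.0)]   # a gamble
B = [(1.0, 3e11)]               # a sure thing

def eu(project, u):
    return sum(p * u(x) for p, x in project)

u_lin, u_sqrt = (lambda x: x), math.sqrt

# Each hypothesis on its own: the linear function prefers A, the sqrt prefers B.
assert eu(A, u_lin) > eu(B, u_lin)     # 5e11 > 3e11
assert eu(A, u_sqrt) < eu(B, u_sqrt)   # ~5.0e5 < ~5.5e5

# (1) Linearly combining the utility functions:
EU_A = p_lin * eu(A, u_lin) + p_sqrt * eu(A, u_sqrt)
EU_B = p_lin * eu(B, u_lin) + p_sqrt * eu(B, u_sqrt)
print("EU maximization picks:", "A" if EU_A > EU_B else "B")
# -> A.  This stays A for any p_sqrt from 1e-10 up past 0.99 (until p_lin
#    drops below roughly 2.4e-7), because the linear function's raw values
#    swamp the sqrt function's.

# (2) A crude stand-in for the bargaining picture when this is the only
# decision and there is nothing to trade: each delegate votes its own
# preference, weighted by credence.
weight_for_A = p_lin * (eu(A, u_lin) > eu(B, u_lin)) \
             + p_sqrt * (eu(A, u_sqrt) > eu(B, u_sqrt))
print("Delegates pick:", "A" if weight_for_A > 0.5 else "B")
# -> B at p_sqrt = 0.99; flips to A once p_sqrt drops below 0.5.
```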
Whenever you deviate from maximizing expected value (in contexts where this is possible) you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.
(And why do you find it odd, BTW?)
For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely it matters what you do with it). Thus, the question looks like one of descriptive uncertainty, just as if you had asked about money: uncertainty about whether you value it according to a particular function is descriptive uncertainty on all plausible theories. Also, while evaluative uncertainty does arise in self-interested cases, this example is a strange case of self-interest, for reasons others have pointed out.
Whenever you deviate from maximizing expected value (in contexts where this is possible) you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.
Can a bargaining solution be value-pumped? My intuition says if it can, then the delegates would choose a different solution. (This seems like an interesting question to look into in more detail though.) But doesn’t your answer also argue against using the bargaining solution in moral uncertainty, and in favor of just sticking with expected utility maximization (and throwing away other incompatible moral philosophies that might be value-pumped)?
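For concreteness, here is the standard sort of pump the worry points at, applied not to Nick's actual bargaining solution but to a naive credence-weighted pairwise-majority rule (a hypothetical stand-in, with made-up theories and outcomes): when three theories' rankings form a Condorcet cycle, an agent that always accepts a majority-preferred swap for a small fee can be walked in a circle and end up strictly worse off by every theory's lights. Whether delegates who can foresee this would still be exploitable is exactly the open question.

```python
# Hypothetical illustration: three moral theories with equal credence whose
# rankings over outcomes X, Y, Z form a Condorcet cycle.  An agent that always
# takes a credence-weighted-majority-preferred swap, paying a small fee each
# time, gets pumped back to where it started.

theories = [
    # (credence, ranking from best to worst)
    (1/3, ["X", "Y", "Z"]),
    (1/3, ["Y", "Z", "X"]),
    (1/3, ["Z", "X", "Y"]),
]

def majority_prefers(new, old):
    """Total credence of theories that rank `new` above `old`."""
    support = sum(c for c, ranking in theories
                  if ranking.index(new) < ranking.index(old))
    return support > 0.5

holding, fees_paid = "Z", 0
for offer in ["Y", "X", "Z"]:          # a cycle of proposed swaps
    if majority_prefers(offer, holding):
        holding = offer
        fees_paid += 1                 # pay a small fee for each accepted swap
print(holding, fees_paid)              # back to "Z", but three fees poorer
```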
For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely it matters what you do with it).
But what I do with negentropy largely depends on what I value, which I don’t know at this point...