Thanks for the post, Wei. I have a couple of comments.
Firstly, the dichotomy between Robin’s approach and Nick’s and mine is not right. Nick and I have always been tempted to treat moral and descriptive uncertainty in exactly the same way insofar as this is possible. However, there are cases where this appears to be ill-defined (e.g. how much happiness for utilitarians is worth breaking a promise for Kantians?), and to deal with these cases Nick and I consider methods that are more generally applicable. We don’t consider the bargaining/voting/market approach to be very plausible as a contender for a unique canonical answer, but as an approach that at least gets the hard cases mostly right instead of remaining silent about them.
In the case you consider (which I find rather odd...), Nick and I would simply multiply it out. However, even if you looked at what our bargaining solution would do, it is not quite what you say. One thing we know is that simple majoritarianism doesn’t work (it is equivalent to picking the theory with the highest credence in two-theory cases). We would prefer to use a random dictator model, or to allow bargaining over future situations too, or even over all conceivable situations, such that the proponent of the square-root view would be willing to offer to capitulate in most future votes in order to win this one.
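To make the majoritarianism point concrete, here is a minimal sketch, with invented credences, of credence-weighted voting versus a random dictator over two theories:

```python
import random

# Two theories with credences p and 1 - p, each preferring one option.
# The credences here are invented, purely for illustration.
credences = {"linear": 0.999, "sqrt": 0.001}
preferences = {"linear": "A", "sqrt": "B"}

def majority_vote():
    # Each theory casts a vote weighted by its credence; with only two
    # theories, the higher-credence theory's preference always wins.
    tally = {}
    for theory, p in credences.items():
        option = preferences[theory]
        tally[option] = tally.get(option, 0.0) + p
    return max(tally, key=tally.get)

def random_dictator():
    # Defer to one theory chosen with probability equal to its credence,
    # so the minority view wins a credence-proportional share of the time.
    r, cumulative = random.random(), 0.0
    for theory, p in credences.items():
        cumulative += p
        if r < cumulative:
            return preferences[theory]
    return preferences[theory]  # guard against floating-point rounding

print(majority_vote())  # always "A"
trials = 100_000
print(sum(random_dictator() == "B" for _ in range(trials)) / trials)  # ~0.001
```

With two theories the weighted vote simply reproduces the higher-credence theory’s choice, while the random dictator model lets the minority view prevail in proportion to its credence.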
Thanks for the clarifications. It looks like I might be more of a proponent of the bargaining approach than you and Nick are at this point.
> We don’t consider the bargaining/voting/market approach to be very plausible as a contender for a unique canonical answer, but as an approach that at least gets the hard cases mostly right instead of remaining silent about them.
I think bargaining, or some of the ideas in bargaining theory (or improvements upon them), could be contenders for the canonical way of merging values (if not moral philosophies).
> In the case you consider (which I find rather odd...), Nick and I would simply multiply it out.
Why? (And why do you find it odd, BTW?)
> However, even if you looked at what our bargaining solution would do, it is not quite what you say.
I was implicitly assuming that this is the only decision (there are no future decisions), in which case the solution Nick described in his Overcoming Bias post does pick project B with certainty, I think. I know this glosses over some subtleties in your ideas, but my main goal was to highlight the difference between bargaining and linearly combining utility functions.
ETA: Also, if we make the probability of the sqrt utility function much smaller, like 10^-10, then the sqrt representative has very little chance of offering enough concessions on future decisions to get its way on this one, but it would still be the case that EU(A) > EU(B).
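To spell out the ETA numerically, here is a minimal sketch with invented figures (the original example’s numbers are not reproduced here). The sqrt theory prefers B on its own terms, but the credence-weighted mixture favours A whether the credence in sqrt is 0.2 or 10^-10; a random-dictator or bargaining scheme, by contrast, would still give the sqrt view roughly a 10^-10 chance of getting B in the latter case.

```python
import math

# Invented figures, purely illustrative.
# Project A: 1e20 units of negentropy with probability 1e-6, else nothing.
# Project B: 1e10 units of negentropy for certain.
A = [(1e-6, 1e20), (1 - 1e-6, 0.0)]
B = [(1.0, 1e10)]

def eu(u, lottery):
    """Expected utility of a lottery under a utility function over negentropy."""
    return sum(p * u(x) for p, x in lottery)

linear = lambda x: x
sqrt = math.sqrt

# On its own terms the sqrt theory prefers B (1e5 > 1e4)...
print(eu(sqrt, A), eu(sqrt, B))

# ...but the credence-weighted mixture prefers A even at credence 1e-10 in sqrt.
for p_sqrt in (0.2, 1e-10):
    EU_A = (1 - p_sqrt) * eu(linear, A) + p_sqrt * eu(sqrt, A)
    EU_B = (1 - p_sqrt) * eu(linear, B) + p_sqrt * eu(sqrt, B)
    print(p_sqrt, EU_A > EU_B)  # True both times
```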
Whenever you deviate from maximizing expected value (in contexts where this is possible), you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.
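For reference, here is the standard money-pump construction (a textbook device, not something from this thread): an agent with cyclic preferences, strictly preferring A to B, B to C, and C to A, will pay a small fee for each upgrade and can be walked around the cycle indefinitely.

```python
# (x, y) in prefers means the agent strictly prefers x to y; note the cycle.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

holding, wealth, fee = "C", 100.0, 1.0
for offer in ["B", "A", "C", "B", "A", "C"]:
    if (offer, holding) in prefers:            # the agent prefers the offer,
        holding, wealth = offer, wealth - fee  # so it pays to trade up
print(holding, wealth)  # C 94.0: back where it started, six fees poorer
```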
(And why do you find it odd, BTW?)
For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely it matters what you do with it). Thus, the question looks like one of descriptive uncertainty, just as if you had asked about money: uncertainty about whether you value it according to a particular function is descriptive uncertainty for all plausible theories. Also, while evaluative uncertainty does arise in self-interested cases, this example is a strange case of self-interest, for reasons others have pointed out.
> Whenever you deviate from maximizing expected value (in contexts where this is possible), you can normally find examples where this behaviour looks incorrect. For example, we might be value-pumped or something.
Can a bargaining solution be value-pumped? My intuition says that if it can, the delegates would choose a different solution. (This seems like an interesting question to look into in more detail, though.) But doesn’t your answer also argue against using the bargaining solution under moral uncertainty, and in favor of just sticking with expected utility maximization (and throwing away other incompatible moral philosophies that might be value-pumped)?
> For one thing, negentropy may well be one of the most generally useful resources, but it seems somewhat unlikely to be intrinsically good (more likely it matters what you do with it).
But what I do with negentropy largely depends on what I value, which I don’t know at this point...