In addition to the purely mathematical problem stated above (I preferred to make two separate comments since they rest on entirely different grounds), I have a few problems with using this kind of reasoning for real-life issues:
Pareto optimality is a very weak condition. Suppose you have a set of 10 values and three possible outcomes: outcome A gives 100 for value 1 and 0 for all others; outcome B gives 99.99 for value 1 and 50 for all others; outcome C gives 99.98 for value 1 and 45 for all others. Outcomes A and B are both Pareto optimal. But unless one value really trumps all the others, we would still prefer outcome C to outcome A, even though C isn't Pareto optimal. Having a decision algorithm that is guaranteed to choose a Pareto optimum doesn't tell you whether it will take A or B. And I would rather have a decision algorithm that selects the non-Pareto-optimal C than one that selects the Pareto-optimal A, in the absence of one guaranteed to take B (which being Pareto-optimal doesn't ensure). (That's also an issue I have with classical economics, by the way.)
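The example can be checked mechanically. Here is a small Python sketch (my own illustration, not part of the thread, with values 2 through 10 of each outcome sharing a single score) that tests which outcomes are Pareto optimal:

```python
def dominates(x, y):
    """True if x is at least as good as y everywhere, and strictly better somewhere."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# The three outcomes from the example, over 10 values.
A = [100.0] + [0.0] * 9
B = [99.99] + [50.0] * 9
C = [99.98] + [45.0] * 9
outcomes = {"A": A, "B": B, "C": C}

# An outcome is Pareto optimal if no other outcome dominates it.
pareto_optimal = {
    name: not any(dominates(other, x) for other in outcomes.values() if other is not x)
    for name, x in outcomes.items()
}
print(pareto_optimal)  # {'A': True, 'B': True, 'C': False}
```

As claimed, A and B come out Pareto optimal, while C is dominated by B (worse on value 1 and on every other value).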
Since we don't know all our values, nor how to measure them precisely (how do you evaluate the "welfare of mammals"?), and since the theorem gives no method for selecting the coefficients, knowing that such a utility function can exist is not very useful for making day-to-day decisions. It is definitely important when working on FAI, and it may matter for reasoning about meta-ethics or policy making. But it is very hard to apply to real-life decisions.
Insisting on Pareto optimality with respect to your values does not rule out all unreasonable policies, but it does rule out a large class of unreasonable policies, without ruling out any reasonable policies. It is true that the theorem doesn’t tell you what your coefficients should be, but figuring out that you need to have coefficients is a worthwhile step on its own.
Outcomes A and B are both Pareto optimal. But unless one value really trumps all the others, we would still prefer outcome C to outcome A, even though C isn't Pareto optimal.
This fits fine within the framework. Suppose that value 1 truly trumps value 2 (dropping the other 8): our aggregation is f1 = x1. Then outcome A, which is a Pareto optimum, also maximizes f1, with a score of 100. Suppose instead that the two values are weighted equally: our aggregation is f2 = x1 + x2. Then outcome B, with f2 = 149.99, is superior to C at 144.98, which is superior to A at 100.
What C's Pareto suboptimality means is that you cannot find an objective function with nonnegative weights under which outcome C is the best outcome. This is a feature; any method of choosing options which doesn't choose a Pareto optimal point can be easily replaced by a method which does, and so it's a good thing that our linear combination cannot fail us that way.
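To illustrate, here is a quick sketch (my own, using the two-value reduction of the example above): since B dominates C in every component, no nonnegative weight vector can score C above B, which a spot-check with random weights confirms.

```python
import random

# Two-value reduction of the running example.
A = (100.0, 0.0)
B = (99.99, 50.0)
C = (99.98, 45.0)

def score(weights, outcome):
    """Linear aggregation: weighted sum of the value scores."""
    return sum(w * x for w, x in zip(weights, outcome))

# B dominates C componentwise, so every nonnegative weight vector
# scores B at least as high as C.
for _ in range(10_000):
    w = (random.random(), random.random())
    assert score(w, B) >= score(w, C)
```

The assertion never fires: a linear objective with nonnegative weights can make A best or B best, but never C.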
And I prefer a decision algorithm that will select the non-Pareto-optimal C than one which will select the Pareto-optimal A, in the absence of one which is guaranteed to take B
Pareto optimality is defined relative to the set of options that could be taken. If outcome B is off the table, then outcome C becomes Pareto optimal. If you prefer a system which prefers C to A, that's a preference over the weights on the aggregating function, which is easy to incorporate.
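A minimal sketch of that last point (my own illustration, again using the two-value reduction): with B removed, equal weights already rank C above A.

```python
# With B off the table, equal nonnegative weights prefer C to A.
A = (100.0, 0.0)
C = (99.98, 45.0)
w = (1.0, 1.0)  # one choice of weights; any weighting that values value 2 enough will do

def score(weights, outcome):
    return sum(wi * xi for wi, xi in zip(weights, outcome))

print(score(w, C), score(w, A))  # 144.98 100.0
```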
But it's very hard to apply to real-life decisions.
Agreed. I don’t expect we’ll ever get perfect measurement of values (we don’t have perfect measurement of anything else), but a mediocre solution is still an improvement over a bad solution.
My point was that being Pareto-optimal is such a weak constraint that it doesn't really matter, in real life, when choosing a decision algorithm. The best possible algorithm will be Pareto-optimal, sure. But that perfect algorithm is usually not an option: we don't have infinite computing power, we don't have perfect knowledge, and we don't have infinite time for convergence to the optimum.
So when choosing between imperfect algorithms, one that is guaranteed to produce a Pareto-optimal outcome may not necessarily be better than one that isn't. An algorithm that is guaranteed to always select answer A or B, but that tends to select answer A, may not be as good as an algorithm that selects answer C most of the time. For example, look at how many versions of utilitarianism collapse when faced with utility monsters. That's a typical flaw of focusing on Pareto-optimal algorithms. More naïve ethical frameworks may not be Pareto-optimal, but they won't consider giving everything to the utility monster a sane output.
Pareto-optimality is not totally useless; it's a good quality to have. But I think that we (especially economists) tend to give it much more value than it really has: it's a very weak constraint, and a very weak indicator of the soundness of an ethical framework, policy-making mechanism, or algorithm.
My point was that being Pareto-optimal is such a weak constraint that it doesn't really matter, in real life, when choosing a decision algorithm.
This does not agree with my professional experience; many real decisions are Pareto suboptimal with respect to terminal values.
But that perfect algorithm is usually not an option: we don't have infinite computing power, we don't have perfect knowledge, and we don't have infinite time for convergence to the optimum.
What? This is the opposite of what you just said; this is “Pareto optimality is too strong a constraint for us to be able to find feasible solutions in practical time.”
I agree with you that Pareto optimality is insufficient, and also that Pareto optimality with respect to terminal values is not necessary. Note that choosing to end your maximization algorithm early, because you have an answer that’s “good enough,” is a Pareto optimal policy with respect to instrumental values!
I think that we understand VNM calculations well enough that most modern improvements in decision-making will come from superior methods of eliciting weights and utility functions. That said, VNM calculations are correct, should be implemented, and resistance to them is misguided. Treat measurement problems as measurement problems, not theoretical problems!