There is also a weird accident-of-history situation where all of the optimizers we’ve had for the last century are really single-objective optimizers at their core. The consequence of this has been that people have gotten in the habit of casting their optimization problems (mathematical, engineering, economic) in terms of a single-valued objective function, which is usually a simple weighted sum of the values of the objectives that they really care about.
To unpack my language choices briefly: when designing a vase, you care about its weight, its material cost, its strength, its radius, its height, possibly 50 other things including corrosion resistance and details of manufacturing complexity. To “optimize” the vase design, historically, you needed to come up with a function that smeared away the detail of the problem into one number, something like the “utility” of the vase design.
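For concreteness, here is a minimal sketch of that move (every attribute name, value, and weight below is invented purely for illustration; none of it comes from any real design code):

```python
# A hypothetical vase design, scored on a few of the attributes you actually care
# about. Every name, value, and weight here is a made-up placeholder.
design = {"weight_kg": 1.2, "cost_usd": 8.0, "strength_mpa": 40.0,
          "radius_cm": 9.0, "height_cm": 30.0}

# The classic single-objective move: commit to weights up front and collapse
# everything into one "utility" number for the optimizer to maximize.
weights = {"weight_kg": -1.0, "cost_usd": -0.5, "strength_mpa": 0.2,
           "radius_cm": 0.1, "height_cm": 0.1}

utility = sum(weights[k] * design[k] for k in design)
print(utility)  # a single number; the structure of the tradeoffs is gone
```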
This is sort of terrible, if you think about it. You sacrifice resolution to make the problem easier to solve, but there’s a serious risk that you throw away what you would actually have considered the global optimum when you do this. You also bake in something like a guess as to what the tradeoffs should be at the Pareto frontier before you actually know what the solutions look like. You know you want the strongest, lightest, cheapest, largest, most beautiful vase, but you can’t have all those things at once, and you don’t really know how those factors trade off against each other until you’re able to hold the result in your hands and compare it to different “optimal” vases from slightly different manifolds. Of course, you can only do that if you accept that you are significantly uncertain about your preferences, meaning the design and optimization process should partly be viewed as an experiment aimed at uncovering your actual preferences about these design tradeoffs, which are a priori unknown.
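To make the “throwing away the optimum” point concrete: a weighted sum can only ever pick designs on the convex hull of the Pareto frontier, so a perfectly reasonable balanced design can be Pareto-optimal yet unreachable for any non-negative choice of weights. A minimal sketch (the three candidate designs and their scores are invented purely for illustration):

```python
# Three hypothetical vase designs scored on two objectives we want to maximize,
# say strength and lightness. All numbers are invented for illustration.
candidates = {
    "A": (1.00, 0.00),   # very strong, very heavy
    "B": (0.00, 1.00),   # very light, very weak
    "C": (0.45, 0.45),   # a balanced design
}

def dominates(p, q):
    """p dominates q if p is at least as good on every objective and better on one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_front(designs):
    return {name: score for name, score in designs.items()
            if not any(dominates(other, score) for other in designs.values())}

def weighted_sum_winner(designs, w1, w2):
    return max(designs, key=lambda k: w1 * designs[k][0] + w2 * designs[k][1])

print(sorted(pareto_front(candidates)))  # ['A', 'B', 'C']: all three are Pareto-optimal
winners = {weighted_sum_winner(candidates, i / 100, 1 - i / 100) for i in range(101)}
print(sorted(winners))                   # ['A', 'B']: no weighting ever selects C
```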
The vase example is both a real example and also a metaphor for how considering humans as agents under the VNM paradigm is basically the same but possibly a million times worse. If you acknowledge the (true) assertion that you can’t really optimize a vase until you have a bunch of differently-optimal vases to examine in order to understand what you actually prefer and what tradeoffs you’re actually willing to make, you have to acknowledge that a human life, which is exponentially more complex, definitely cannot be usefully treated with such a tool.
As a final comment, there is almost a motte-and-bailey thing happening where Rationalists will say that, obviously, the VNM axioms describe the optimal framework in which to make decisions, and then proceed to never ever actually use the VNM axioms to make decisions.
I agree. I love “Notes on the Synthesis of Form” by Christopher Alexander as a mathematical model of problems very close to your vase example.