There is also a weird accident-of-history situation where all of the optimizers we’ve had for the last century are really single-objective optimizers at their core. As a consequence, people have gotten into the habit of casting their optimization problems (mathematical, engineering, economic) in terms of a single-valued objective function, which is usually a simple weighted sum of the objectives they really care about.
To unpack my language choices briefly: when designing a vase, you care about its weight, its material cost, its strength, its radius, its height, possibly 50 other things including corrosion resistance and details of manufacturing complexity. To “optimize” the vase design, historically, you needed to come up with a function that smeared away the detail of the problem into one number, something like the “utility” of the vase design.
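To make that collapse concrete, here is a minimal sketch in Python (the objectives, weights, and candidate designs are all hypothetical, invented purely for illustration):

```python
# Sketch: collapsing a multi-objective vase design into one number
# via a weighted sum. All names, numbers, and weights are illustrative.

def vase_utility(design, weights):
    """Smear several objectives into a single 'utility' score.

    design:  dict of objective values (signed so that bigger is better).
    weights: hand-picked tradeoff weights, fixed *before* we know
             what the Pareto frontier actually looks like.
    """
    return sum(weights[k] * design[k] for k in weights)

candidates = [
    {"strength": 9.0, "weight_kg": -1.2, "cost": -30.0},
    {"strength": 6.5, "weight_kg": -0.8, "cost": -18.0},
    {"strength": 8.0, "weight_kg": -1.0, "cost": -22.0},
]
weights = {"strength": 1.0, "weight_kg": 2.0, "cost": 0.1}

best = max(candidates, key=lambda d: vase_utility(d, weights))
print(best)  # the single "optimal" vase, given this particular guess at weights
```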
This is sort of terrible, if you think about it. You sacrifice resolution to make the problem easier to solve, but there’s a serious risk that you end up throwing away what you might have considered the global optimum when you do this. You also bake in something like a guess as to what the tradeoffs should be at the Pareto frontier before you actually know what the solution will look like. You know you want the strongest, lightest, cheapest, largest, most beautiful vase, but you can’t have all those things at once, and you don’t really know how those factors trade off against each other until you’re able to hold the result in your hands and compare it to different “optimal” vases from slightly different manifolds. Of course, you can only do that if you accept that you are significantly uncertain about your preferences; the design and optimization process should then partly be viewed as an experiment aimed at uncovering your actual preferences about these design tradeoffs, which are a priori unknown.
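And a sketch of that specific risk (again with made-up numbers): a design can sit on the Pareto frontier and still never be selected by any weighted sum, because a weighted sum can only ever pick points on the convex portions of the frontier.

```python
# Sketch: Pareto filtering over two maximized objectives vs. a weighted sum.
# The (strength, beauty) pairs are illustrative; both are "higher is better".

designs = [(1.0, 9.0), (4.0, 8.0), (5.0, 5.0), (8.0, 4.0), (9.0, 1.0)]

def dominates(a, b):
    """a dominates b: at least as good everywhere, strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto = [d for d in designs if not any(dominates(o, d) for o in designs)]
print("Pareto set:", pareto)  # includes (5.0, 5.0)

# ...yet no weights w1, w2 >= 0 make w1*strength + w2*beauty pick (5.0, 5.0):
winners = set()
for i in range(21):
    w1 = i / 20
    w2 = 1 - w1
    winners.add(max(designs, key=lambda d: w1 * d[0] + w2 * d[1]))
print("Weighted-sum winners:", winners)  # (5.0, 5.0) never appears
```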
The vase example is both a real example and a metaphor for how considering humans as agents under the VNM paradigm is basically the same problem, but possibly a million times worse. If you accept the (true) assertion that you can’t really optimize a vase until you have a bunch of differently-optimal vases to examine in order to understand what you actually prefer and what tradeoffs you’re actually willing to make, you have to acknowledge that a human life, which is exponentially more complex, definitely cannot be usefully treated with such a tool.
As a final comment, there is almost a motte-and-bailey thing happening where Rationalists will say that, obviously, the VNM axioms describe the optimal framework in which to make decisions, and then proceed to never ever actually use the VNM axioms to make decisions.
As a final comment, there is almost a motte-and-bailey thing happening where Rationalists will say that, obviously, the VNM axioms describe the optimal framework in which to make decisions, and then proceed to never ever actually use the VNM axioms to make decisions.
This is a misunderstanding. The vNM axioms constrain the shape of an agent’s preferences; they say nothing about how to make decisions, and I don’t think many people ever claimed this—maybe nobody ever claimed this[1]? The vNM axioms specify that ~~your utilities should be linear in probability~~ your utility indifference curves should form parallel hyperplanes in all dimensions of the probability simplex on the available options. That’s it. Preferences conforming to the vNM axioms may be necessary for making “good” (i.e. unexploitable) decisions, but not sufficient.
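In symbols (my sketch of the standard statement, not notation from the comment): a vNM utility over a lottery $p = (p_1, \ldots, p_n)$ on outcomes $x_1, \ldots, x_n$ takes the form

```latex
U(p) = \sum_{i=1}^{n} p_i \, u(x_i)
```

which is linear in $p$, so each indifference set $\{\, p \in \Delta^{n-1} : U(p) = c \,\}$ is the intersection of the probability simplex with a hyperplane, and varying $c$ gives a family of parallel hyperplanes.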
A similar misunderstanding would be to say “Rationalists will say that, obviously, the Peano axioms describe the optimal framework in which to perform arithmetic, and then proceed to never ever actually invoke any of the Peano axioms when doing their taxes.”
I do agree that there was an implicit promise of “this piece of math will be applicable to your life”, which was fulfilled less for the vNM axioms than for Bayes’ theorem.
[1] I welcome examples where people claimed this.
The vNM axioms specify that your utilities should be linear in probability. That’s it.
I don’t think this is right. You are perhaps thinking of the continuity axiom here? But the completeness axiom is not about this (indeed, one cannot even construct a unique utility function to represent incomplete preferences, so there is nothing which may be linear or non-linear in probability).
Oops, you’re of course right. I’ll change my comment.
The vNM axioms constrain the shape of an agent’s preferences; they say nothing about how to make decisions
Suppose your decision in a particular situation comes down to choosing between some number of lotteries (with specific estimated probabilities over their outcomes), and there’s no complexity/nuance/tricks on top of that. In that case, vNM says that you should choose the one with the highest expected utility, as this is the one you prefer the most.
At least, that assumes choice is the right operationalization of preferences; if it isn’t, then the Dutch book / money-pump arguments don’t follow.
ETA: I guess I could just say:
What are your preferences if not your idealized evaluations of decision-worthiness of options (modulo “being a corrupted piece of software running on corrupted hardware”)?
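To make the lottery-choice reading above concrete, here is a minimal sketch (the lotteries and utility numbers are made up for illustration):

```python
# Sketch: choosing among lotteries by maximizing expected utility.
# Each lottery is a list of (probability, utility-of-outcome) pairs.

lotteries = {
    "A": [(0.5, 100), (0.5, 0)],   # EU = 50
    "B": [(0.9, 40), (0.1, 60)],   # EU = 42
    "C": [(1.0, 45)],              # EU = 45
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(lotteries, key=lambda name: expected_utility(lotteries[name]))
print(best)  # "A": the lottery vNM-style preferences rank highest
```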
I agree. I love “Notes on the Synthesis of Form” by Christopher Alexander as a mathematical model of problems much like your vase example.