Hum, yes, indeed I got the P and v_i backwards, sorry.
The argument still holds, but with the \forall and the \exists in the other order:
\exists (c_1, \ldots, c_n) such that \forall P, if P is Pareto optimal, then P is a maximum of \sum_{i=1}^{n} c_i \times v_i.
Having a utility function means the weighting (the c_i) can vary between individuals, but not between situations. If for each situation (“world history” more exactly) you choose a different set of coefficients, it’s no longer a utility function; you can justify just about anything that way, simply by choosing the coefficients you want.
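To make that concrete, here is a minimal Python sketch (toy numbers of my own, not anything from the original post): with a single fixed pair of coefficients, no weighting rationalizes both choices below as strict maxima of \sum_i c_i \times v_i, but re-picking the coefficients per situation rationalizes them trivially.

```python
# Toy sketch (made-up numbers): with one fixed (c_1, c_2), the weighted sum
# sum_i c_i * v_i cannot rationalize both choices below as strict maxima,
# but per-situation coefficients rationalize them trivially.

# situations[s][option] = (v_1, v_2): each individual's valuation of each option.
situations = {
    "A": {"x": (1.0, 0.0), "y": (0.0, 1.0)},
    "B": {"x2": (1.0, 0.0), "y2": (0.0, 1.0)},
}
# The pattern of choices to rationalize: favour individual 1 in A, individual 2 in B.
chosen = {"A": "x", "B": "y2"}

def is_strict_max(option, options, c):
    """True if `option` strictly maximizes sum_i c_i * v_i over `options`."""
    score = lambda o: sum(ci * vi for ci, vi in zip(c, options[o]))
    return all(score(option) > score(o) for o in options if o != option)

# 1) No single fixed (c_1, c_2) on a grid makes both chosen options strict maxima.
grid = [(k / 10, 1 - k / 10) for k in range(11)]
fixed_ok = [c for c in grid
            if all(is_strict_max(chosen[s], situations[s], c) for s in situations)]
print("fixed weights that work:", fixed_ok)   # -> []

# 2) But a different set of coefficients per situation "works" immediately.
per_situation = {"A": (2.0, 1.0), "B": (1.0, 2.0)}
print(all(is_strict_max(chosen[s], situations[s], per_situation[s])
          for s in situations))               # -> True
```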
That doesn’t work, because v_i is defined as a mapping from P to the reals; if you change P, then you also change v_i, and so you can’t define them out of order.
I suspect you’re confusing p, the individual policies that an agent could adopt, and P, the complete collection of policies that the agent could adopt.
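To spell that distinction out, here is a small typing sketch (my own notation, nothing from the post): p ranges over individual policies, P is the complete collection, and each v_i is only defined relative to a fixed P, so the c_i cannot be chosen before P is.

```python
# Typing sketch (my notation, not the post's): p ranges over individual
# policies, P is the complete collection, and each v_i is only defined
# relative to a fixed P, so the c_i cannot be chosen before P is.

from typing import Callable, Dict, List

Policy = str                            # an individual policy p
P: List[Policy] = ["p1", "p2", "p3"]    # the complete collection of policies
Valuation = Callable[[Policy], float]   # v_i : P -> R

# The v_i only make sense once P is fixed; change P and you change the v_i.
v: Dict[int, Valuation] = {
    1: {"p1": 4.0, "p2": 3.0, "p3": 0.0}.get,
    2: {"p1": 0.0, "p2": 3.0, "p3": 4.0}.get,
}
# The theorem reads "for every Pareto optimal p in P, there exist c_i",
# not "there exist c_i such that for every P ..." -- the order matters.
```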
Another way to express the theorem is that there is a many-to-one mapping from choices of c_i to Pareto optimal policies, where each policy maximizes \sum_{i=1}^{n} c_i \times v_i for that choice of c_i.
[Edit] It’s not strictly many-to-one, since you can choose c_i’s that make you indifferent between multiple Pareto optimal basic policies, but you recapture the many-to-one behavior if you massage your definition of “policy,” and it’s many-to-one for most choices of c_i.
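Here is a quick numerical sketch (my own toy example, not from the post) of that many-to-one mapping: sweeping a grid of weights (c_1, c_2) over four policies, many weight vectors select the same Pareto optimal policy, and the dominated one is never selected.

```python
# Small numerical sketch (toy example): many weight vectors (c_1, c_2) pick
# out the same Pareto optimal policy as the maximizer of sum_i c_i * v_i,
# and a Pareto dominated policy is never picked.

# Each policy's value (v_1, v_2) for the two individuals.
policies = {
    "p1": (4.0, 0.0),
    "p2": (3.0, 3.0),
    "p3": (0.0, 4.0),
    "p4": (1.0, 1.0),   # Pareto dominated by p2, never a maximizer
}

def maximizer(c):
    """Policy maximizing sum_i c_i * v_i for weights c (ties broken by name)."""
    return max(sorted(policies),
               key=lambda p: sum(ci * vi for ci, vi in zip(c, policies[p])))

# Sweep a grid of normalized weights and record which policy each one selects.
mapping = {}
for k in range(101):
    c = (k / 100, 1 - k / 100)
    mapping.setdefault(maximizer(c), []).append(c)

for p, cs in sorted(mapping.items()):
    print(p, "selected by", len(cs), "weight vectors")
# p4 never appears; p1, p2 and p3 each absorb a whole range of weights.
```

At the boundary weights the maximizer is not unique, which is exactly the indifference case mentioned in the edit above; the sketch just breaks those ties by policy name.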