This seems really neat, but it seems quite sensitive to how one defines the worlds under consideration, and whether one counts slightly different worlds as actually distinct. Let me try to illustrate this with an example.
Suppose we have a W consisting of 7 worlds, W={A,B,C,D,X,Y,Z}, with preferences A<B<C<D, X<Y<Z
and no other non-trivial preferences. Then (from the ‘sensible case’), I think we get the following utilities: A↦−3, X↦−2, B↦−1, Y↦0, C↦1, Z↦2, D↦3.
Suppose now that I create two new copies X′, X′′ of the world X, each differing from X by the position of a single atom, so as to give me the (extremely weak!) preferences X′′<X′<X; all the non-trivial preferences in the new W are now summarised as
A<B<C<D,X′′<X′<X<Y<Z.
Then the resulting utilities are (I think): X′′↦−4, A↦−3, X′↦−2, B↦−1, X↦0, C↦1, Y↦2, D↦3, Z↦4.
In particular, before adding in these ‘trivial copies’ we had U(Z)<U(D), and now we get U(D)<U(Z). Is this a problem? It depends on the situation, but to me it suggests that, if using this approach, one needs to be careful in how the worlds are specified, and the ‘fine-grainedness’ needs to be roughly the same everywhere.
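For concreteness, here is a minimal sketch (in Python) of the arithmetic above. It assumes nothing beyond what is used in this example: in the ‘sensible case’, each linked chain of n+1 worlds gets the evenly spaced utilities −n, −n+2, …, n; the function names are just for illustration.

```python
# Sketch of the utility assignment used in the example above, assuming the
# "sensible case" gives each linked chain evenly spaced utilities
# -n, -n+2, ..., n, where n is the number of preference links in the chain.

def chain_utilities(chain):
    """Map a totally ordered chain (worst to best) to utilities -n, ..., n in steps of 2."""
    n = len(chain) - 1  # number of preference links
    return {world: 2 * i - n for i, world in enumerate(chain)}

def utilities(chains):
    """Combine the utilities of several disjoint chains into one dictionary."""
    result = {}
    for chain in chains:
        result.update(chain_utilities(chain))
    return result

# Original 7-world example: A<B<C<D and X<Y<Z.
before = utilities([["A", "B", "C", "D"], ["X", "Y", "Z"]])
# After adding the near-duplicate worlds X'' < X' < X.
after = utilities([["A", "B", "C", "D"], ["X''", "X'", "X", "Y", "Z"]])

print(before)  # {'A': -3, 'B': -1, 'C': 1, 'D': 3, 'X': -2, 'Y': 0, 'Z': 2}
print(after)   # A..D unchanged; X'' -> -4, X' -> -2, X -> 0, Y -> 2, Z -> 4
print(before["Z"] < before["D"], after["D"] < after["Z"])  # True True: the order of D and Z flips
```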
Each partial preference is meant to represent a single mental model inside the human, with all preferences weighted the same (so there can’t be “extremely weak” preferences compared with other preferences in the same partial preference). Things like “increased income is better”, “more people smiling is better”, “being embarrassed on stage is worse”.
We can imagine a partial preference with more internal structure, maybe internal weights, but I’d simply see that as two separate partial preferences. So we’d have the utilities you gave to A through to Z for one partial preference (actually, my formula doubles the numbers you gave), and X′′→−2, X′→0, X→2 for the other partial preference—which has a very low weight by assumption. So the order of Z and D is not affected.
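To illustrate with a quick sketch: the weight 0.01 below is just an arbitrary stand-in for “very low weight”, and I’m assuming for the sketch that a world outside a partial preference’s domain simply contributes 0 for it.

```python
# The two partial preferences above, combined as a weighted sum. Assumptions for
# the sketch: a world outside a partial preference's domain contributes 0 for it,
# and 0.01 is an arbitrary stand-in for "very low weight".
u1 = {"A": -3, "X": -2, "B": -1, "Y": 0, "C": 1, "Z": 2, "D": 3}  # A<B<C<D, X<Y<Z
u2 = {"X''": -2, "X'": 0, "X": 2}                                  # X''<X'<X

def combine(weighted_prefs):
    """Weighted sum of partial-preference utilities; missing worlds count as 0."""
    worlds = {w for u, _ in weighted_prefs for w in u}
    return {w: sum(weight * u.get(w, 0) for u, weight in weighted_prefs) for w in worlds}

combined = combine([(u1, 1.0), (u2, 0.01)])
print(combined["D"], combined["Z"])  # 3.0 2.0 -- D still above Z
```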
EDIT: I’m pretty sure we can generalise my method for different weights of preferences, by changing the formula that sums the squares of the utility differences.
(actually, my formula doubles the numbers you gave)
Are you sure? Suppose we take W = W_1 ⊔ W_2 with W_1 = {A,B,C,D}, W_2 = {X,Y,Z}; then n_1 = 3, so the values for W_1 should be −3, −1, 1, 3, as I gave them. And similarly for W_2, giving values −2, 0, 2. Or have I misunderstood your definition?
I’d simply see that as two separate partial preferences
Just to be clear, by “separate partial preference” you mean a separate preorder, on a set of objects which may or may not have some overlap with the objects we considered so far? Then somehow the work is just postponed to the point where we try to combine partial preferences?
EDIT (in reply to your edit): I guess e.g. keeping conditions 1, 2, 3 the same and instead minimising g(G) = ∑_{w←w′} λ_{w←w′} (U(w′) − U(w))²,
where λ_{w←w′} ∈ ℝ_{>0} is proportional to the reciprocal of the strength of the preference? Of course there are lots of variants on this!
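As a rough illustration of why this weighting seems to behave well: since conditions 1, 2, 3 aren’t written out in this thread, the sketch below stands in for them with a single made-up constraint, namely fixing the total spread of a chain. Under that stand-in, minimising g gives each link a utility gap proportional to its stated strength.

```python
import random

# Stand-in assumption (not from the post): conditions 1, 2, 3 are replaced here
# by fixing the total spread U(best) - U(worst) of a single chain.
strengths = [1.0, 1.0, 5.0]           # e.g. two ordinary links and one five times stronger
lams = [1.0 / s for s in strengths]   # lambda_{w <- w'} = 1 / strength
total_spread = 6.0                    # held fixed by the stand-in constraint

# Lagrange multipliers give gap_e proportional to 1/lambda_e, i.e. to the strength:
gaps = [(1.0 / lam) * total_spread / sum(1.0 / l for l in lams) for lam in lams]
print(gaps)  # [0.857..., 0.857..., 4.285...]: the strong preference gets the larger gap

def g(gs):
    """The proposed objective: sum of lambda_e * (utility gap)^2 over the links."""
    return sum(lam * gap ** 2 for lam, gap in zip(lams, gs))

# Sanity check: perturbations that preserve the total spread never decrease g.
best = g(gaps)
for _ in range(1000):
    eps = random.uniform(-0.5, 0.5)
    assert g([gaps[0] + eps, gaps[1] - eps, gaps[2]]) >= best - 1e-9
```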
Yep, sorry, I saw −3, −2, −1, etc… and concluded you weren’t doing the 2 jumps; my bad!
Then somehow the work is just postponed to the point where we try to combine partial preferences?
Yes. But unless we have other partial preferences or meta-preferences, the only reasonable way of combining them is just to add them, after weighting.
I like your reciprocal weighting formula. It seems to have good properties.