Each partial preference is meant to represent a single mental model inside the human, with all preferences weighted the same (so there can’t be “extremely weak” preferences, compared with other preferences in the same partial preference). Things like “increased income is better”, “more people smiling is better”, “being embarrassed on stage is worse”.
We can imagine a partial preference with more internal structure, maybe internal weights, but I’d simply see that as two separate partial preferences. So we’d have the utilities you gave to A through to Z for one partial preference (actually, my formula doubles the numbers you gave), and X′′→−2, X′→0, X→2 for the other partial preference—which has a very low weight by assumption. So the order of Z and D is not affected.
EDIT: I’m pretty sure we can generalise my method to different weights of preferences, by changing the formula that sums the squares of utility differences.
(actually, my formula doubles the numbers you gave)
Are you sure? Suppose we take W=W1⊔W2 with W1={A,B,C,D}, W2={X,Y,Z}; then n1=3, so the values for W1 should be −3,−1,1,3 as I gave them. And similarly for W2, giving values −2,0,2. Or have I misunderstood your definition?
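On this reading of the definition, the per-component normalisation is easy to sketch: each totally ordered component gets mean-zero utilities with spacing 2. A minimal illustration (the helper name `component_utilities` is my own, not from the thread):

```python
def component_utilities(n_worlds):
    """Mean-zero, spacing-2 utilities for a totally ordered chain of worlds.

    For n_worlds elements, the i-th world (worst first) gets
    2*i - (n_worlds - 1), so adjacent worlds differ by 2 and the
    values sum to zero.
    """
    return [2 * i - (n_worlds - 1) for i in range(n_worlds)]

print(component_utilities(4))  # [-3, -1, 1, 3]  for W1 = {A, B, C, D}
print(component_utilities(3))  # [-2, 0, 2]      for W2 = {X, Y, Z}
```

This reproduces the −3,−1,1,3 and −2,0,2 values above, with each disjoint component normalised independently.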
I’d simply see that as two separate partial preferences
Just to be clear, by “separate partial preference” you mean a separate preorder, on a set of objects which may or may not have some overlap with the objects we considered so far? Then somehow the work is just postponed to the point where we try to combine partial preferences?
EDIT (in reply to your edit): I guess e.g. keeping conditions 1, 2, 3 the same and instead minimising
g(G) = Σ_{w←w′} λ_{w←w′} (U(w′)−U(w))²,
where λ_{w←w′} ∈ ℝ_{>0} is proportional to the reciprocal of the strength of the preference? Of course there are lots of variants on this!
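One way to see what this weighting does (a sketch under my own simplifying assumption, not the thread’s exact conditions): on a single chain, suppose conditions 1–3 fix the total spread U(w_k) − U(w_0) at 2k, matching the unweighted spacing-2 normalisation. Minimising Σ λ_i·gap_i² subject to a fixed sum of gaps gives gap_i ∝ 1/λ_i by a one-line Lagrange-multiplier argument, so a large λ (a weak preference, on the reciprocal convention) gets a compressed gap:

```python
def chain_utilities(lams):
    """Mean-zero utilities for a chain, with weighted-least-squares gaps.

    lams[i] weights the squared gap between world i and world i+1.
    Assumption (mine): total spread is fixed at 2 * len(lams), so
    equal weights recover the unweighted spacing-2 values.
    """
    spread = 2 * len(lams)
    inv = [1.0 / lam for lam in lams]
    scale = spread / sum(inv)
    gaps = [scale * v for v in inv]      # minimiser: gap_i proportional to 1/lam_i
    utils = [0.0]
    for gap in gaps:
        utils.append(utils[-1] + gap)
    mean = sum(utils) / len(utils)
    return [u - mean for u in utils]     # centre at mean zero

# Equal weights reproduce the unweighted normalisation for 4 worlds:
print(chain_utilities([1, 1, 1]))        # [-3.0, -1.0, 1.0, 3.0]
# A weak middle preference (large lambda) gets a smaller utility gap:
print(chain_utilities([1, 4, 1]))
```

With λ = (1, 4, 1) the middle gap shrinks relative to the outer ones, which is the “very low weight barely affects the order” behaviour described above.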
Yep, sorry, I saw −3, −2, −1, etc… and concluded you weren’t doing the 2 jumps; my bad!
Then somehow the work is just postponed to the point where we try to combine partial preferences?
Yes. But unless we have other partial preferences or meta-preferences, the only reasonable way of combining them is just to add them, after weighting.
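Combining by weighted addition can be sketched in a few lines (a minimal illustration; the handling of worlds missing from a partial preference, contributing 0, is my assumption, and the numbers below are hypothetical):

```python
def combine(partials):
    """Combine partial preferences by weighted sum of their utilities.

    partials: list of (weight, {world: utility}) pairs.
    Assumption (mine): a world absent from a partial preference simply
    contributes nothing from it.
    """
    total = {}
    for weight, utils in partials:
        for world, u in utils.items():
            total[world] = total.get(world, 0.0) + weight * u
    return total

# A low-weight second preference barely perturbs the first, so the
# order of D and Z is unaffected (illustrative numbers, not the thread's):
print(combine([(1.0, {'D': 1.0, 'Z': 3.0}),
               (0.1, {'D': 2.0, 'Z': -2.0})]))
```

Here D ends at 1.2 and Z at 2.8, so Z still beats D despite the second preference pushing the other way.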
I like your reciprocal weighting formula. It seems to have good properties.