Er, this normalisation system may well solve that problem entirely. If $U_i$ prefers option $o_i$ (utility 1), with second choice $o_0$ (utility 1/2), and all the other options as third choices (utility 0), then the expected utility of the random dictator is $1/n$ for every $U_i$ (since $p^*_i$ gives utility 1, and $p^*_j$ gives utility 0 for all $j \neq i$), so the normalised weighted utility to maximise is:
$$U = \frac{1}{n-1}(U_1 + U_2 + \dots + U_n).$$
Using $(n-1)U$ (because rescaling doesn't change expected utility decisions), the utility of any $o_i$, $i > 0$, is 1, while the utility of $o_0$ is $n/2$. So if $n > 2$, the compromise option $o_0$ will get chosen.
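For concreteness, here is a small numeric sketch of that arithmetic. The exact normalisation is an assumption on my part: each $U_i$ is divided by its maximum minus its random-dictator value, $1 - 1/n$, and the normalised utilities are given equal weights $1/n$, which reproduces the $\frac{1}{n-1}\sum_i U_i$ formula above.

```python
# Sketch only: "n" and the utility profile are the ones from the example above.
n = 5  # any n > 2 illustrates the point

def utility(agent, option):
    # Agent i gets 1 from its own option o_i, 1/2 from the compromise o_0,
    # and 0 from every other option.
    if option == agent:
        return 1.0
    if option == 0:
        return 0.5
    return 0.0

# Expected utility each agent gets from the random dictator:
# dictator j picks o_j, and each j is chosen with probability 1/n.
rd_value = [sum(utility(i, j) for j in range(1, n + 1)) / n
            for i in range(1, n + 1)]
print(rd_value)  # [1/n, 1/n, ..., 1/n]

def normalised_sum(option):
    # Assumed normalisation: divide U_i by (max - random-dictator value),
    # then take the equally weighted (1/n) sum.
    return sum(utility(i, option) / (1.0 - rd_value[i - 1])
               for i in range(1, n + 1)) / n

print(normalised_sum(1))  # any o_i, i > 0: 1/(n-1)
print(normalised_sum(0))  # the compromise o_0: (n/2)/(n-1), larger once n > 2
```

Running this for any $n > 2$ shows the compromise option scoring strictly higher than every individual favourite, as claimed.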
Don’t confuse the problems of the random dictator with the problems of maximising the weighted sum of the normalisations derived from the random dictator (and don’t confuse them the other way round, either: the random dictator is immune to players lying, while this normalisation is not).
I was aware of that, but I was addressing his objection as though it were justified, which it would be if this were the only place where the agent’s preferences matter. This counterfactual is supported by my fondness for linear logic.