[Edit: the following example is bad. I might rewrite my thoughts about meta-preferentialism in the future, in which case I will write a better example and link to it here]
I did answer that question (albeit indirectly), but let me make it explicit.
Because of score voting, the issue between total- and average-aggregation is indeed dissolved (even with a fixed population).
I will note that score voting also resolves the second problem the vast majority of the time, but let's look at a (very) rare case where it would actually produce a tie:
Alice and Bob want: Total (0.25), Average (1), Median (0)
Cindy and Dan want: Total (0.25), Average (0), Median (1)
And Elizabeth wants: Total (1), Average (0), Median (0)
So the final scores are: Total (2), Average (2), Median (2)
(Note that for convenience I assume the ambivalence factor has already been factored in)
In this case only one person is completely in favor of Total, with the others lukewarm toward it, but there is a very strong split on the Average-vs-Median question (yes, this is a very bizarre scenario).
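To make the arithmetic concrete, here is a minimal sketch of the tally; the Python, the 0-to-1 scoring scale, and the dictionary layout are just illustrative assumptions, not part of the proposal itself:

```python
# A minimal sketch of the score-vote tabulation in the example above.
ballots = {
    "Alice":     {"Total": 0.25, "Average": 1, "Median": 0},
    "Bob":       {"Total": 0.25, "Average": 1, "Median": 0},
    "Cindy":     {"Total": 0.25, "Average": 0, "Median": 1},
    "Dan":       {"Total": 0.25, "Average": 0, "Median": 1},
    "Elizabeth": {"Total": 1,    "Average": 0, "Median": 0},
}

# Score voting: each option's final score is simply the sum of the scores it received.
totals = {}
for scores in ballots.values():
    for option, score in scores.items():
        totals[option] = totals.get(option, 0) + score

print(totals)  # {'Total': 2.0, 'Average': 2, 'Median': 2} -- a three-way tie
```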
Now numerically these all have the same preference, so the next question becomes: what do we pursue? This could be solved with a score vote too: how strong is your preference for:
(1) Picking one strategy at random (2) Pursuing each strategy 33% of the time (3) Picking the method that the fewest people gave a zero (4) Pursuing, proportionally, only the methods that more than one person gave a 1... etc., etc.
But what if, due to some unbelievable cosmic coincidence, that next vote also ends in a tie?
Well, you go up one more level, until either ambivalence takes over (I doubt I would care after 5 levels of meta) or there is a tie-breaker. Although it is technically possible to have a tie at infinitely many meta-levels, in reality this will never happen.
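Here is a rough sketch of that escalation, under the assumption that we can elicit a fresh score vote at each meta-level via a hypothetical `elicit_ballots(level)` callback (level 0 = the original options, level 1 = tie-break strategies, and so on). Applying the winning tie-break strategy back down to the lower level is left abstract, and the 5-level cutoff just mirrors the off-hand "I doubt I would care after 5 levels" remark above:

```python
import random

def tally(ballots):
    """Sum the scores each option received across all ballots."""
    totals = {}
    for scores in ballots:
        for option, score in scores.items():
            totals[option] = totals.get(option, 0) + score
    return totals

def winners(totals):
    """Return every option tied for the highest total score."""
    best = max(totals.values())
    return [opt for opt, score in totals.items() if score == best]

def resolve(elicit_ballots, level=0, max_levels=5):
    """Run a score vote; on a tie, escalate to a meta-vote on how to break it."""
    tied = winners(tally(elicit_ballots(level)))
    if len(tied) == 1:
        return tied[0], level
    if level >= max_levels:
        # Ambivalence takes over: past this point nobody cares, so any choice is fine.
        return random.choice(tied), level
    return resolve(elicit_ballots, level + 1, max_levels)
```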
And yes, you go up as many levels of meta as needed to solve the problem. I only call it ‘meta-preference utilitarianism’ because ‘gauging-a-potentially-infinite-amount-of-meta-preferences utilitarianism’ isn’t quite as catchy.