As I pointed out in that thread, their solution doesn’t work. You would need to choose an aggregation mechanism to combine votes. Different mechanisms will cause different systematic outcomes. Notably, some mechanisms will result in always choosing actions from one category; some will result in sampling from different categories proportionally to their votes (much as, e.g., the American system always chooses the most popular candidate, resulting in a two-party equilibrium, while many European systems allocate seats proportionally to votes, allowing equilibria with more than two parties).
You need to choose which kind of outcome you prefer in order to choose your aggregation mechanism, in order to implement their solution. But if you could do that, you wouldn’t need their solution in the first place!
You need to choose which kind of outcome you prefer in order to choose your aggregation mechanism
Is this really the case? It doesn’t seem true of axiomatic approaches to decision theory in general, so is there a special reason to think it should be true here?
But if you could do that, you wouldn’t need their solution in the first place!
I guess I would view the parliamentary mechanism more as an intuition pump than a “solution” per se. It may well be that, having thought through its implications, it will turn out that the results can be represented in (say) the standard vNM framework. Nonetheless, the parliamentary model could still be helpful in getting a handle on the nature of the “utility” functions involved.
As an aside, it seems as though their parliamentary approach could potentially be modeled more effectively using co-operative game theory than the more standard non-cooperative version.
Is this really the case? It doesn’t seem true of axiomatic approaches to decision theory in general, so is there a special reason to think it should be true here?
I just gave the reason. “Some mechanisms will result in always choosing actions from one category; some mechanisms will result in sampling from different categories proportionally to their votes.”
The aggregation mechanism is a lot like the thread priority system in a computer operating system. Some operating systems always give the CPU to the highest-priority task; some try to give tasks CPU time proportional to their priority. Likewise, some aggregation mechanisms always choose the most popular option, never giving any voice to minority opinions; some choose options with probability proportional to their popularity. You have to choose which type of aggregation mechanism to use. But this choice is exactly the sort of choice that the parliament is supposed to produce as output, not require as input.
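To make the contrast concrete, here is a minimal sketch of the two kinds of mechanism applied to the same votes (the option names and vote counts are made up for illustration): one rule always picks the plurality winner, the other samples options in proportion to their vote share.

```python
import random

votes = {"option_a": 60, "option_b": 30, "option_c": 10}  # hypothetical vote totals

def winner_take_all(votes):
    # Always act on the most popular option; minority positions never act.
    return max(votes, key=votes.get)

def proportional_sample(votes):
    # Act on an option with probability proportional to its vote share,
    # so minority positions get to act some fraction of the time.
    options, weights = zip(*votes.items())
    return random.choices(options, weights=weights, k=1)[0]

print(winner_take_all(votes))      # always "option_a"
print(proportional_sample(votes))  # "option_a" ~60%, "option_b" ~30%, "option_c" ~10%
```

Both rules are trivial to implement; the point is that picking between them already encodes a substantive stance on how much say minority positions get, which is exactly the choice the parliament was supposed to settle for us.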
I wonder if it would work to renormalize utility so that the total of everything that’s “at stake” (in some sense that would need to be made more precise) is always worth the same?
Probably this gives too much weight to easy-to-achieve moralities, like the morality that says all that matters is whether you’re happy tomorrow? It also doesn’t accommodate non-consequentialist moralities.
But does it ever make sense to respond to new moral information by saying, “huh, I guess existence as a whole doesn’t matter as much as I thought it did”? It seems counterintuitive somehow.
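As a rough illustration of the renormalization idea (everything here is hypothetical: the theory names, credences, and utility numbers are made up), one could rescale each theory’s utilities so that the gap between its best and worst available outcome is the same, then combine the rescaled utilities with credence weights:

```python
# Hypothetical sketch: range-normalize each moral theory's utilities so the
# spread between its best and worst available outcome ("what's at stake") is
# equal across theories, then weight by credence in each theory.

theories = {
    # name: (credence, {outcome: utility under that theory})
    "total_utilitarian": (0.6, {"act1": 0.0, "act2": 10.0, "act3": 100.0}),
    "happy_tomorrow":    (0.4, {"act1": 0.0, "act2": 1.0,  "act3": 1.0}),
}

def combined_score(outcome):
    total = 0.0
    for credence, utilities in theories.values():
        lo, hi = min(utilities.values()), max(utilities.values())
        spread = (hi - lo) or 1.0  # guard against a theory with nothing at stake
        total += credence * (utilities[outcome] - lo) / spread
    return total

outcomes = theories["total_utilitarian"][1].keys()
print({o: round(combined_score(o), 2) for o in outcomes})
```

Note how the toy “happy_tomorrow” theory ends up pulling with the same normalized force as the theory with far more at stake, which is the over-weighting worry raised above; and nothing in the sketch says how a non-consequentialist theory would assign the numbers in the first place.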
I can’t follow your comment. I would need some inferential steps filled in: between the prior comment and the first sentence of your comment, and between each sentence of your comment and the next.