I think I like this post, but not the approaches.
A correct solution to moral uncertainty must not depend on cardinal utility and must involve some rationality. So the Borda rule doesn't qualify. Parliamentary-model approaches are more interesting because they rely on intelligent agents to do the work.
An example of a good approach is the market mechanism. You do not assume any cardinal utility; in fact, you do not do anything directly with the preferences you have a probability distribution over at all. Instead, you instantiate an agent for each candidate preference ordering and extrapolate what that agent would do if it had no uncertainty over its preferences and pursued them rationally, when placed in a carefully designed environment that lets agents form arbitrary binding consensual precommitments (“contracts”) with one another, and that weights each agent’s influence over the outcomes the agents care about according to your probabilities.
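To make the shape of this concrete, here is a toy sketch in Python. It is my own illustration, not anything from the post: one sub-agent per candidate moral theory, ordinal preferences only, influence proportional to your credence, and a crude stand-in for binding contracts in the form of influence trades across decisions. The even budget split, the `trade` helper, and the restriction to pairwise decisions are all simplifying assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class SubAgent:
    """One sub-agent per candidate moral theory."""
    name: str
    credence: float                      # your probability that this theory is correct
    prefers: Callable[[str, str], bool]  # ordinal only: prefers(a, b) -> a is better than b
    influence: Dict[str, float] = field(default_factory=dict)

def endow(agents: List[SubAgent], decisions: Dict[str, Tuple[str, str]]) -> None:
    """Split each agent's credence evenly across decisions as spendable influence."""
    for a in agents:
        share = a.credence / len(decisions)
        a.influence = {d: share for d in decisions}

def trade(buyer: SubAgent, seller: SubAgent, d_buy: str, d_sell: str, amount: float) -> None:
    """A binding 'contract': buyer acquires `amount` of seller's influence on d_buy,
    paying with the same amount of its own influence on d_sell."""
    seller.influence[d_buy] -= amount
    buyer.influence[d_buy] += amount
    buyer.influence[d_sell] -= amount
    seller.influence[d_sell] += amount

def resolve(agents: List[SubAgent], decisions: Dict[str, Tuple[str, str]]) -> Dict[str, str]:
    """Each binary decision goes to whichever option attracts more backing influence."""
    result = {}
    for d, (x, y) in decisions.items():
        support = {x: 0.0, y: 0.0}
        for a in agents:
            pick = x if a.prefers(x, y) else y
            support[pick] += a.influence[d]
        result[d] = max(support, key=support.get)
    return result

# Example: two theories, two decisions; the low-credence theory cares much more
# about decision "d2", so it trades away influence on "d1" in order to win "d2".
decisions = {"d1": ("A", "B"), "d2": ("C", "D")}
util = SubAgent("utilitarian", 0.7, lambda a, b: a in ("A", "C"))
deon = SubAgent("deontologist", 0.3, lambda a, b: a in ("B", "D"))
endow([util, deon], decisions)
trade(buyer=deon, seller=util, d_buy="d2", d_sell="d1", amount=0.15)
print(resolve([util, deon], decisions))  # {'d1': 'A', 'd2': 'D'}
```

The point is only structural: nothing in the sketch ever compares utilities across sub-agents; the only cross-theory information used is your probability weights, and the crude `trade` helper marks the spot where the real proposal would allow arbitrary bargaining and contracts.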
What is tricky is making the philosophical argument that this is indeed the solution to moral uncertainty we are interested in. I’m not saying it is the correct solution, but it respects some principles that any correct solution should:
do not use cardinal utility; use partial orders;
do not do anything with the preferences yourself; you are at high risk of doing something incoherent;
use tools that are powerful and universal: intelligent agents that you let bargain using full Turing machines. You need strong properties, not (for example) mere Pareto efficiency.