I’m quite convinced by how you analyze the problem of what morality is and how we should think about it, up until the point about how universally it applies. I’m just not sure that humans’ different shards of godshatter add up to the same thing across people, a point that I think would become apparent as soon as you started to specify what the huge computation actually WAS.
I would think of the output not as a yes/no answer, but as something akin to ‘What percentage of human beings would agree that this was a good outcome, or could be convinced of it by some set of arguments?’ Some things, like saving a child’s life, would receive very widespread agreement. Others, like a global Islamic caliphate or widespread promiscuous sex, would draw more disagreement, including potentially disagreement that cannot be resolved by presenting any conceivable argument to the parties.
The question of ‘how much’ each person views something as moral comes into play as well. If different people can’t all be convinced of a particular outcome’s morality, the question ends up looking remarkably similar to the question in economics of how to aggregate many people’s preferences over goods. Because you can’t observe preferences directly, you let everyone trade and express their desires through revealed preference to reach a Pareto outcome. Here, a solution might be to assign each person a certain amount of ‘morality dollars’, let them spend as they wish across outcomes, and add it all up. As in economics, there’s still the question of how to allocate the initial wealth (in this case, how much to weigh each person’s opinions).
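To make that aggregation idea concrete, here is a minimal sketch in Python of what I have in mind, with all names and numbers purely hypothetical: each person gets an initial budget, spends fractions of it across outcomes, and we sum the spending per outcome.

```python
from collections import defaultdict

def aggregate_moral_judgments(budgets, allocations):
    """Toy 'morality dollars' aggregation (illustrative only).

    budgets:     {person: initial moral 'wealth' assigned to that person}
    allocations: {person: {outcome: fraction of their budget spent on it}}
    Returns the total moral spend per outcome, summed across people.
    """
    totals = defaultdict(float)
    for person, budget in budgets.items():
        for outcome, fraction in allocations.get(person, {}).items():
            totals[outcome] += budget * fraction
    return dict(totals)

# Two people given equal initial weight, disagreeing on priorities.
budgets = {"alice": 1.0, "bob": 1.0}
allocations = {
    "alice": {"save_childs_life": 0.9, "global_caliphate": 0.1},
    "bob":   {"save_childs_life": 0.6, "global_caliphate": 0.4},
}
print(aggregate_moral_judgments(budgets, allocations))
# {'save_childs_life': 1.5, 'global_caliphate': 0.5}
```

The choice of `budgets` is exactly the unresolved ‘initial wealth’ question: the output depends heavily on how much weight each person starts with.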
I don’t know how much I’m distorting what you meant—it almost feels like we’ve just replaced ‘morality as preference’ with ‘morality as aggregate preference’, and I don’t think that’s what you had in mind.