I guess this allows that they can still have very different goals, since they ought to be able to coordinate if they have identical utility functions, i.e. they rank outcomes and prospects identically (although I guess there’s still a question of whether differences in epistemic states could cause failures to coordinate?). Something like “maximize total hedonistic utility” could be coordinated on if everyone adopted it. But that’s of course a much less general case than arbitrary and differing preferences. A toy illustration of the identical-utility case, with made-up numbers of my own, is sketched below.
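Here’s what I have in mind (just a hypothetical two-agent, two-action example, not anything worked out in detail): with one shared utility function over joint outcomes, both agents simply pick whatever that function ranks highest; with differing utilities, each prefers a different coordinated outcome, so coordination can fail even when both would benefit from coordinating.

```latex
% Toy sketch with hypothetical numbers: identical vs. differing utilities
% over the joint actions of two agents, each choosing A or B.
%
% Identical utilities: both agents maximize the same u, so both pick the
% same joint outcome and coordination is trivial (given shared beliefs).
\[
u(A,A) = 2,\quad u(B,B) = 1,\quad u(A,B) = u(B,A) = 0
\;\Rightarrow\; \text{both choose } A.
\]
% Differing utilities (battle-of-the-sexes style): each agent prefers a
% different coordinated outcome, so they may miscoordinate without some
% further mechanism (communication, convention, bargaining, etc.).
\[
u_1(A,A) = 2,\; u_1(B,B) = 1; \qquad
u_2(A,A) = 1,\; u_2(B,B) = 2; \qquad
u_i(A,B) = u_i(B,A) = 0.
\]
```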
Also, is the result closer to preference utilitarianism or contractualism than to deontology? Couldn’t you treat others as mere means, as long as their interests are outweighed by others’ (whether or not you’re aggregating)? So you would still get the consequentialist judgements in various thought experiments. Never treating others as mere means seems like a rule that’s too risk-averse or ambiguity-averse or loss-averse about one very specific kind of risk or cause of harm that’s singled out (being treated as a mere means), at possibly significant average opportunity cost.
Maybe some aversion can be justified because of differences in empirical beliefs, and to reduce risks from motivated reasoning, the typical mind fallacy, or paternalism, which could lead to tragedy-of-the-commons-like situations, e.g. everyone exploiting one another while mistakenly believing it’s in people’s best interests overall when it isn’t, so people are made worse off overall. And if people are more averse to exploiting or otherwise harming others, they’re more trustworthy and cooperation is easier.
But there are very probably cases where very minor exploitation in exchange for very significant benefits (including preventing very significant harms) would be worth it.
Agreed. I haven’t worked out the details, but I imagine the long-run ideal competitive decision apps would resemble both Kantianism and preference-rule-utilitarianism, while being importantly different from each. Idk. I’d love for someone to work out the details!
Interesting!