Just imagine that you had solid evidence (from observing parallel universes, or from simulations done by a superhuman AI) that, e.g., tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of the communist party. If you had good reason to believe these models, what would your values make you do?
Perfect-information scenarios are useful for clarifying some cases, I suppose (and let's go with the non-humanity-destroying option every time), but I don't find that they map too closely onto actual situations.
I'm not sure I can aptly articulate my intuition here. As for differences in values, I don't really think people differ so much that their terminal values would diverge greatly if each made a list of everything they would want in a perfect world (barring outliers). But the relative weights people place on those values, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each person is committed to using reason.
But I'll concede that some ideologies are much more comfortable with utilitarian analysis, as opposed to more rigid imperatives that are likelier to yield consistent results.