Choosing those goals is not something that rationality can help much with; the best it can do is identify where goals are not internally consistent.
It often turns out that generating consistent decision rules is harder than one might expect. Hence the plethora of “impossibility theorems” in social choice theory. (Many of these, like Arrow’s, arise when people try to rule out interpersonal utility comparisons, but a number bite even when such comparisons are allowed, e.g. in population ethics.)
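To make that concrete, the Condorcet paradox is probably the simplest illustration: pairwise majority vote, which looks like an obviously reasonable decision rule, can produce an intransitive group preference. A minimal sketch (Python; the candidates and ballots are a made-up example):

```python
from itertools import combinations

# Classic Condorcet cycle: three voters with rotated preference orders.
ballots = [
    ["x", "y", "z"],  # voter 1: x > y > z
    ["y", "z", "x"],  # voter 2: y > z > x
    ["z", "x", "y"],  # voter 3: z > x > y
]

def majority_prefers(a, b, ballots):
    """True if a strict majority ranks candidate a above candidate b."""
    wins = sum(ballot.index(a) < ballot.index(b) for ballot in ballots)
    return wins > len(ballots) / 2

for a, b in combinations("xyz", 2):
    if majority_prefers(a, b, ballots):
        print(f"majority prefers {a} over {b}")
    elif majority_prefers(b, a, ballots):
        print(f"majority prefers {b} over {a}")

# Prints:
#   majority prefers x over y
#   majority prefers z over x
#   majority prefers y over z
# i.e. x > y > z > x: no consistent group ranking exists.
```

Arrow’s theorem is, roughly, the statement that this isn’t a quirk of majority rule: no method of aggregating individual rankings into a group ranking can rule out this kind of pathology without giving up some other basic desideratum (unrestricted domain, Pareto, independence of irrelevant alternatives, or non-dictatorship).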
Yeah, expecting to achieve consistency is probably too much to ask, but recognizing conflicts at least allows you to make a conscious choice about priorities.