pragmatist, apologies if I gave the impression that by “impartially gives weight” I meant impartially gives equal weight. Thus the preferences of a cow or a pig or a human trump the conflicting interests of a less sentient Anopheles mosquito or a locust every time. But on the conception of rational agency I’m canvassing, it is neither epistemically nor instrumentally rational for an ideal agent to disregard a stronger preference simply because that stronger preference is entertained by a member of another species or ethnic group. Nor is it epistemically or instrumentally rational for an ideal agent to disregard a conflicting stronger preference simply because her comparatively weaker preference looms larger in her own imagination. So on this analysis, Jane is not doing what “an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose.”
Rationality can be used toward any goal, including goals that don’t care about anyone’s preferences. For example, there’s nothing in the math of utility maximisation that requires averaging over other agents’ preferences (note: do not confuse utility maximisation with utilitarianism; they are very different things, the former being a decision theory, the latter a specific moral philosophy).
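(To make that concrete, here is a minimal sketch, not anything from the thread itself: a toy expected-utility maximiser in Python whose utility function is a made-up illustrative goal, counting paperclips. Nothing in the maximisation step itself refers to anyone else’s preferences; the goal is entirely supplied by the utility function.)

```python
# Minimal sketch of expected-utility maximisation (illustrative only).
# The agent picks the action with the highest expected utility; the
# utility function below is an arbitrary, hypothetical goal (paperclip
# count) with no term for other agents' preferences.

def expected_utility(action, outcome_probs, utility):
    # outcome_probs: mapping action -> {outcome: probability}
    return sum(p * utility(outcome) for outcome, p in outcome_probs[action].items())

def choose(actions, outcome_probs, utility):
    # Standard argmax over expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical toy example: outcomes are (label, paperclip_count) pairs.
outcome_probs = {
    "make_paperclips": {("paperclips", 10): 0.9, ("paperclips", 0): 0.1},
    "help_others":     {("paperclips", 1): 1.0},
}
utility = lambda outcome: outcome[1]  # cares only about paperclip count

print(choose(outcome_probs.keys(), outcome_probs, utility))  # -> "make_paperclips"
```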
nshepperd, utilitarianism conceived as a theory of value is not always carefully distinguished from utilitarianism (especially rule-utilitarianism) conceived as a decision procedure. This distinction is nicely brought out in the BPhil thesis of FHI’s Toby Ord, “Consequentialism and Decision Procedures”:
http://www.amirrorclear.net/academic/papers/decision-procedures.pdf
Toby takes a global utilitarian consequentialist approach to the question “How should I decide what to do?”, a subtly different question from “What should I do?”