I thought that both utilitarians and consequentialists would push someone onto train tracks. Since he drew a distinction between the two, I was wondering what it was.
Yes, they will. Consequentialists are a superset of utilitarians: not all consequentialists are utilitarians. For example, one could be a consequentialist who would choose torture over dust specks, but a utilitarian would not.
So what exactly is the difference?
Consequentialism decides between actions only by reducing them to expected outcomes (or probability distributions over outcomes), and comparing those outcomes. Utilitarianism is consequentialist, but with additional structure to how it compares outcomes. In particular, utilitarians combine uncertain outcomes by weighting them linearly with weights proportional to probability. Additionally, many (but not all) utilitarians also subdivide their utility functions by agent, specifying that individuals’ preferences are to be quantified and linearly combined.
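To make that structure concrete, here is a minimal sketch (purely my own toy illustration, with made-up names and numbers): any consequentialist needs some way of ranking outcomes, and the utilitarian recipe below builds that ranking by weighting uncertain outcomes linearly by probability and summing per-person utilities.

    # Toy sketch of the structure described above (illustrative only).
    # A "lottery" is a list of (probability, outcome) pairs; an outcome is just
    # a dict giving each person a numeric utility in that outcome.

    def utilitarian_value(outcome):
        # Combine per-person utilities symmetrically (here: a plain sum).
        return sum(outcome.values())

    def expected_value(lottery, value=utilitarian_value):
        # Weight uncertain outcomes linearly by their probabilities.
        return sum(p * value(outcome) for p, outcome in lottery)

    # Made-up trolley numbers, only to show the mechanics:
    push = [(1.0, {"bystander": -100.0, "five_on_tracks": 0.0})]
    dont_push = [(1.0, {"bystander": 0.0, "five_on_tracks": -500.0})]
    options = {"push": push, "don't push": dont_push}
    print(max(options, key=lambda name: expected_value(options[name])))  # -> push

A non-utilitarian consequentialist could swap in a different value function (or a different way of handling the probabilities) and would still be ranking actions purely by their outcomes.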
Hm, that’s not how I was breaking it down. We really haven’t standardized on terminology here, have we? That would be a useful thing.
Here’s how I was using the terms:
Consequentialist—Morality/shouldness refers to a preference ordering over the set of possible universe-histories, not e.g. a set of social rules; consequences must always be considered in determining what is right. The ends (considered in totality, obviously) justify the means, or as Eliezer put it, “Shouldness flows backwards”. This description may be a bit too inclusive, but it still excludes people who say “You don’t push people onto the train tracks no matter what.”
Unnamed category—agents whose preferences are described by utility functions. This needn’t have much to do with morality at all, really, but an agent that was both rational and truly consequentialist would necessarily fall into this category, as otherwise it would (if I’m not mistaken) be vulnerable to Dutch books.
Utilitarian—Someone who not only uses a utility function, but uses one that assigns some number to each person P (possibilities include some measure of “net happiness”, or something based on P’s utility function... even though raw numbers from utility functions are meaningless...) and then computes utility by combining these numbers in some way that treats all people symmetrically (e.g. summing or averaging).
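To illustrate the “treats all people symmetrically” clause, here is a quick toy check (again, just my own illustration): an aggregation rule is symmetric if shuffling which person gets which number never changes the result.

    # Toy check of symmetric aggregation (illustrative only).
    from itertools import permutations

    def is_symmetric(aggregate, per_person_numbers):
        # Symmetric: permuting who gets which number never changes the result.
        results = {aggregate(list(perm)) for perm in permutations(per_person_numbers)}
        return len(results) == 1

    numbers = [3.0, -1.0, 7.5]
    print(is_symmetric(sum, numbers))                           # True: summing
    print(is_symmetric(lambda xs: sum(xs) / len(xs), numbers))  # True: averaging
    print(is_symmetric(lambda xs: xs[0], numbers))              # False: privileges one person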
Consequentialists still have to have an underlying “feel” for morality outside of consequentialism. That is, they need to have some preference ordering that is not itself consequentialist in nature, be it social Darwinism, extreme nationalism, or whatever other grouping it may be.
Utilitarianism is a subset of consequentialism that takes as its preference ordering the overall happiness of society.
Yes, consequentialism is a criterion a system can satisfy, not a system in and of itself. Your definition of utilitarianism is too narrow, though, in that it seems to only include “classical utilitarianism”, and not e.g. preference utilitarianism.
I’m not sure that the distinctions are precise. As I understand it, a utilitarian assigns everyone a utility and then just takes the sum, and sees how to maximize that. A consequentialist might not calculate their end goals in that way, even though they are willing to take any relevant actions to move the world into a state they consider better.
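As a toy contrast (my own made-up example, not anyone's official formalism): both rules below rank world-states purely by their consequences, but only the first follows the “assign everyone a utility and take the sum” recipe.

    # Two consequentialist rankings over the same made-up world-states.
    def utilitarian_rank(outcome):
        return sum(outcome.values())   # maximize total utility

    def maximin_rank(outcome):
        return min(outcome.values())   # maximize the worst-off person's utility

    world_a = {"alice": 10, "bob": 10, "carol": -9}  # higher total, worse minimum
    world_b = {"alice": 2, "bob": 2, "carol": 2}     # lower total, better minimum

    print(max([world_a, world_b], key=utilitarian_rank) is world_a)  # True
    print(max([world_a, world_b], key=maximin_rank) is world_b)      # True

The maximin rule still judges actions entirely by the states they bring about, so it is consequentialist, but it never computes a sum over people, so it is not utilitarian in the sense described above.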