What does it mean to talk about morality or human motivation using the terms of utilitarianism and consequentialism? It means restricting oneself to the vocabulary of that moral philosophy and to the rules it provides for deriving new sentences from that vocabulary. Once you restrict your vocabulary and the rules for forming sentences with it, you usually restrict what conclusions you can derive.
If you think in terms of consequentialism, what operations can you do? You can assign utilities to different world states (depending on the flavour of consequentialism you are using, you might face further restrictions on how you can do this) and you can compare them. Or, in another version, you cannot assign utilities directly, but you can impose a partial order (a binary relation) on pairs of world states. That’s all. If you add something else, then you are no longer talking in purely consequentialist terms. For example, take the trolley problem. Given the way the dilemma is usually described, there are not many sentences you can derive using consequentialist terms: the whole framing of the problem gives you just two world states and asks you to assign utilities to them.
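To make the restriction vivid, here is a minimal sketch in Python (purely illustrative; the world states and utility values are invented for the example) of the only two operations this vocabulary permits:

```python
# The entire expressive power of the consequentialist vocabulary, as
# described above, reduced to code. World states and utility values
# are invented for illustration.

# Version 1: assign utilities to world states directly.
utility = {
    "five_people_die": -5.0,  # trolley stays on its track
    "one_person_dies": -1.0,  # trolley is diverted
}

def prefer(state_a: str, state_b: str) -> str:
    """Comparison is the only 'inference' the vocabulary supports."""
    return state_a if utility[state_a] > utility[state_b] else state_b

print(prefer("five_people_die", "one_person_dies"))  # -> one_person_dies

# Version 2: no direct utilities, only a partial order on world states,
# stored as a set of pairs (a, b) meaning "a is at least as good as b".
weakly_preferred = {("one_person_dies", "five_people_die")}

def at_least_as_good(state_a: str, state_b: str) -> bool:
    return (state_a, state_b) in weakly_preferred
```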
Now, you can use the terms of consequentialist moral philosophy to talk about all human motivation: if your preferences satisfy certain axioms, the von Neumann–Morgenstern utility theorem allows that. Let’s denote this way of thinking as (1).
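For reference, the axioms in question are completeness, transitivity, continuity, and independence (this is the standard statement of the theorem, added here for context rather than taken from the discussion being replied to). Preferences over lotteries satisfying them are representable by expected utility:

```latex
% Von Neumann–Morgenstern representation: for a preference relation
% $\succeq$ over lotteries satisfying completeness, transitivity,
% continuity, and independence, there exists $u$ such that
\[
  A \succeq B
  \quad\Longleftrightarrow\quad
  \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)].
\]
```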
Or you can use the terms of consequentialist moral philosophy in a much more restricted domain. Most people use those terms only to talk about things they consider related to morality (how some problems come to be discussed in the terms of moral philosophy and treated as moral problems while others don’t is an interesting, but quite distinct, question). When they talk about human motivation in general, they use terms that come from outside consequentialist moral philosophy. Let’s denote this way of thinking as (2).
Now, what do you use to describe all human motivation: just the terms of consequentialist moral philosophy, or other terms as well? Let’s compare two quotes:
> It also appears to imply that donating all your money to charity beyond what you need to survive isn’t just admirable but morally obligatory.
and
> But where does the “obligatory” part come in. I don’t really how its obvious what, if any, ethical obligations utilitarianism implies.
Now, I know very little about what kind of theory of morality and human motivation you or Chris Hallquist support, so the next paragraph is based on the impressions I got from reading those two quotes.
I think your confusion comes from assuming that Chris Hallquist is using the terms of consequentialist moral philosophy in pretty much the same way you do. It seems to me, however, that Chris Hallquist is using them in way (1) (or close to it), whereas you are closer to way (2). And when you think about human motivation as a whole, you use various terms and concepts, some of which are not from the vocabulary of consequentialism.
The very fact that you can ask such a question (“But where does the “obligatory” part come in. I don’t really how its obvious what, if any, ethical obligations utilitarianism implies.”) implies that you are using terms that come from outside consequentialism, because remember:
in consequentialism you can only assign utilities to world states and compare them; that’s all. The very fact that it makes sense to you that someone could compare the utilities of two world states, find that the utility of world_state_1 is greater than the utility of world_state_2, and then disobey this comparison, means that when you think about human motivation you are using (perhaps implicitly) concepts that come from somewhere other than consequentialism [1]. There is no way to derive disobedience using the operations of consequentialism. Therefore, if you use the terms of consequentialism to describe all human motivation (way (1)), whatever the utility comparison favours cannot fail to be “obligatory”. I think this is the idea Chris Hallquist is implicitly trying to convey.

Using way (1) (which I think Chris Hallquist is using): if your utility function assigns utilities to world states in such a way that the world states achievable only by donating a lot of money to charity (and in no other way) are preferable to other world states, then you are by definition motivated to donate as much money to charity as possible. Now, isn’t that a bit tautological? (The toy sketch after the footnote makes this concrete.) If you use terms such as “utility function” to describe all human motivation, why are such encouragements to donate to charity even needed? Wouldn’t you already be motivated to donate a lot of your income to charity?

I think that a hypothetical utilitarian who says such things (a hypothetical person whose ideas about utilitarianism Chris Hallquist is channeling) is trying to modify your de facto utility function (if we use this term to describe and model all human motivation, assuming that’s possible) by appealing to the kind of de facto utility function you would like to have, or would like to imagine yourself having. In other words: what would you like to be motivated by? This hypothetical utilitarian would like your motivation to be such that it could be modeled by a utility function that assigns higher utilities to world states which (in this particular case) are achievable by donating a lot of money to charity.
[1] Of course, there is another possibility: that you talk about certain things using terms such as “utility function”, while all your motivations (including, obviously, subconscious ones) can also be modeled by a utility function, but the two functions are different. The impression of disobedience would then come from the fact that the conclusions derivable from the second utility function differ from the conclusions you derive using the first.
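To make the tautology concrete, here is a toy sketch of way (1) (illustrative only; the states and numbers are invented):

```python
# Toy model of way (1): the agent just is its utility function.
# States and utility values are invented for illustration.

utility = {
    "donate_most_income_to_charity": 10.0,
    "donate_nothing": 2.0,
}

def act(reachable_states: list[str]) -> str:
    """Under way (1), acting means picking the highest-utility reachable
    state. 'Disobeying the obligation' is not an expressible event:
    whatever the agent does is, by definition, what it was motivated
    to do."""
    return max(reachable_states, key=lambda s: utility[s])

print(act(["donate_most_income_to_charity", "donate_nothing"]))
# -> donate_most_income_to_charity
```

In this model, the only thing an exhortation can accomplish is to change the utility dictionary itself, which is exactly the “modify your de facto utility function” reading above.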
> Many thoughtful people identify as utilitarian [...] yet do not think people have extreme obligations.
My impression is that most people who identify as utilitarians do not use the terms of consequentialist moral philosophy to describe all of their motivation. They use them when they talk about problems and situations that are considered related to morality. For example, when they read about something and recognize it as a moral problem, they start using those terms. But their whole apparatus of motivation (which may or may not be modelable as a utility function) is much larger than that, and their utilitarianism (i.e. the utility function as they are able to consciously think about it) doesn’t cover all of it, because that would be too difficult. The most you can say is that they think about various situations and what they should do if they found themselves in them (the trolley dilemma, among others), precompute and cache the answers, and (if their memory, courage, and willpower don’t fail them) perform those actions when those situations arise.
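That “precompute and cache” picture can itself be sketched in code (purely illustrative; the dilemmas and cached answers are invented for the example):

```python
# Sketch of way (2): the moral vocabulary is invoked only for recognized
# moral problems, whose answers have been precomputed and cached.
# The dilemmas and answers here are invented for illustration.

cached_answers = {
    "trolley_dilemma": "pull the lever",
    "drowning_child": "wade in and save the child",
}

def respond(situation: str) -> str:
    if situation in cached_answers:
        # ...assuming memory, courage, and willpower don't fail
        return cached_answers[situation]
    # Everything else is handled by motivations outside the moral vocabulary.
    return "handled by non-moral motivation"
```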