I can see that you might question the usefulness of the notion of a “reason for action” as something over and above the notion of “ought”, but I don’t see a comparably good case for thinking that “reason for action” is confused.
The main worry here seems to have to do with categorical reasons for action. Diagnostic question: are these more troubling/confused than categorical “ought” statements? If so, why?
Perhaps I should note that philosophers talking this way make a distinction between “motivating reasons” and “normative reasons”. A normative reason to do A is a good reason to do A: something that would help explain why you ought to do A, or something that counts in favor of doing A. A motivating reason just helps explain why someone did, in fact, do A. One of my motivating reasons for killing my mother might be to prevent her from being happy. By saying this, I do not suggest that this is a normative reason to kill my mother. It could also be that R would be a normative reason for me to do A, even though R does not motivate me to do A. (ata seems to assume otherwise, since ata is getting caught up with who these considerations would motivate. Whether reasons can work like this is a matter of philosophical controversy. I’m saying this more for others than for you, Luke.)
Back to the main point: I am puzzled largely because the most natural ways of getting categorical oughts can also get you categorical reasons. Example: simple total utilitarianism. On this view, R is a reason to do A if R is the fact that doing A would cause someone’s well-being to increase. The strength of R is the extent to which that person’s well-being increases. One weighs one’s reasons by adding up all of their strengths. One then does the thing that one has most reason to do. (It’s pretty clear in this case that the notion of a reason plays an inessential role in the theory. We can get by just fine with well-being, ought, causal notions, and addition.)
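To make that bookkeeping concrete, here is a minimal sketch (Python; the action names and well-being numbers are purely my own illustrative assumptions, not anything from the literature) of how the reasons detour and the direct well-being calculation come to the same answer for simple total utilitarianism:

```python
# Toy sketch: reasons-talk vs. direct well-being talk for simple total
# utilitarianism. All names and numbers are illustrative assumptions.

# For each action, list the facts of the form "doing A would increase this
# person's well-being by delta". Each such fact is a reason to do A; its
# strength is the size of the increase.
wellbeing_effects = {
    "donate": {"Alice": 5.0, "Bob": 3.0},   # two reasons to donate, with strengths
    "keep":   {"me": 4.0},                  # one reason to keep the money
}

def total_reason_strength(action):
    """Weigh the reasons for an action by adding up their strengths."""
    return sum(wellbeing_effects[action].values())

# "Do the thing you have most reason to do" ...
best_by_reasons = max(wellbeing_effects, key=total_reason_strength)

# ... which here is just "do the thing that maximizes total well-being".
best_by_wellbeing = max(wellbeing_effects,
                        key=lambda a: sum(wellbeing_effects[a].values()))

assert best_by_reasons == best_by_wellbeing
print(best_by_reasons)  # -> "donate"
```

The assert is the whole point: on this view, “the action you have most reason to do” and “the action that maximizes total well-being” pick out the same thing, which is why the reasons vocabulary is dispensable here.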
Utilitarianism, as always, is a simple case. But it seems like many categorical oughts can be thought of as being determined by weighing the factors that count in favor of and against the course of action in question. In these cases, we should be able to do something like what we did for utilitarianism (though sometimes the method of weighing the reasons will be different or more complicated; in some bad cases, this might make the detour through reasons pointless).
The reasons framework seems a bit more natural in non-consequentialist cases. Imagine I try to maximize aggregate well-being, but I hate having to lie in order to do it. I might count the fact that an action would involve lying as a reason not to do it, without believing that my lying makes the world worse. To get oughts out of a utility function instead, you might model my utility function as the result of adding up aggregate well-being and subtracting a factor that scales with the number of lies I would have to tell if I took the action in question. Again, it’s pretty clear that you don’t HAVE to think about things this way, but it is far from clear that this is confused/incoherent.
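Here is one way to cash out that modelling move, again only an illustrative sketch: the penalty weight and the candidate actions are numbers I made up, not anything the view commits you to.

```python
# Toy sketch of the non-consequentialist case: utility = aggregate well-being
# minus a penalty that scales with the number of lies the action requires.
# The penalty weight (2.0) is an arbitrary illustrative choice.
LIE_PENALTY = 2.0

def utility(aggregate_wellbeing, num_lies):
    """Aggregate well-being, discounted by a term that scales with lies told."""
    return aggregate_wellbeing - LIE_PENALTY * num_lies

# Candidate actions: (aggregate well-being produced, lies required).
actions = {
    "honest_plan":    (10.0, 0),
    "deceptive_plan": (11.0, 1),  # slightly better for the world, but needs a lie
}

best = max(actions, key=lambda a: utility(*actions[a]))
print(best)  # -> "honest_plan": the reason against lying outweighs the small gain
```

The same preferences could of course be described in reasons-talk (lying as a reason against, well-being gains as reasons for) or in utility-function talk; the two are just different bookkeeping for the same verdicts.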
Perhaps the LW crowd is perplexed because people here take utility functions as primitive, whereas philosophers talking this way tend to take reasons as primitive and derive ought statements (and, on a very lucky day, utility functions) from them. This paper, which tries to help the reasons folks and the utility-function folks understand and communicate with each other, might be helpful for anyone who cares much about this. My impression is that we clearly need utility functions, but don’t necessarily need the reasons talk. The main advantage of getting up to speed on the reasons talk is being able to understand philosophers who talk that way, if that’s important to you. (Much of the recent work in meta-ethics relies heavily on the notion of a normative reason, as I’m sure Luke knows.)
utilitymonster,
For the record, as a good old Humean I’m currently an internalist about reasons, which leaves me unable (I think) to endorse any form of utilitarianism, where utilitarianism is the view that we ought to maximize X. Why? Because internal reasons don’t always support maximizing X, and perhaps rarely do, and I don’t think external reasons for maximizing X exist. For example, I don’t think X has intrinsic value (in Korsgaard’s sense of “intrinsic value”).
Thanks for the link to that paper on rational choice theories and decision theories!
So are categorical reasons any worse off than categorical oughts?
Categorical oughts and reasons have always confused me. What do you see as the difference, and which type of each are you thinking of? The types of categorical reasons or oughts with which I’m most familiar are Kant’s and Korsgaard’s.
R is a categorical reason for S to do A iff R counts in favor of doing A for S, and would so count for any other agent in a similar situation, regardless of their preferences. If it were true that we always have reasons to benefit others, regardless of what we care about, that would be a categorical reason. I don’t use the term “categorical reason” any differently than “external reason”.
S categorically ought to do A just when S ought to do A regardless of what S cares about, and it would still be true that S ought to do A in similar situations, regardless of what S cares about. The rule “always maximize happiness” would, if true, ground a categorical ought.
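Stated a bit more schematically (this is just my own gloss on the two definitions above, with Favors, Ought, and Similar left as unanalyzed placeholders, S' ranging over agents, and P ranging over what an agent cares about):

```latex
% Schematic gloss on the two definitions; Favors, Ought, Similar are
% unanalyzed placeholder predicates. Requires amsmath for "aligned".
\[
\begin{aligned}
\mathrm{CatReason}(R, S, A) \;\iff\;& \forall S'\, \forall P \;
  \bigl[\, \mathrm{Similar}(S', S) \rightarrow \mathrm{Favors}(R, A, S', P) \,\bigr] \\
\mathrm{CatOught}(S, A) \;\iff\;& \forall S'\, \forall P \;
  \bigl[\, \mathrm{Similar}(S', S) \rightarrow \mathrm{Ought}(S', A, P) \,\bigr]
\end{aligned}
\]
```

On this rendering, “regardless of what the agent cares about” just shows up as the universal quantifier over P in both cases, which is part of why the two notions look structurally alike to me.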
I see very little reason to be more skeptical of categorical reasons than of categorical oughts, or the other way around.
Agreed. And I’m skeptical of both. You?
Hard to be confident about these things, but I don’t see the problem with external reasons/oughts. Some people seem to have some kind of metaphysical worry (that they’re harder to reduce, or something), but I don’t see it.