SilasBarta has half the answer, which is that public punishment of criminal A is excellent for deterring law-abiding citizen B from committing crimes.
The second half of the answer is that most people believe in justice. Justice makes little sense from a utilitarian perspective (except that public justice deters others), but it is a commonly held belief that bad deeds actually do deserve punishment regardless of the utilitarian function involved. The belief exists not only in most societies, but also among most intelligent animals (particularly primates). Now, a utilitarian may want to discard this evolutionary baggage, but to do so will be very politically difficult.
That’s just Azathoth trying to do the same thing.
Reading the wikipedia entry on Azathoth didn’t help me figure out what you mean to say.
http://lesswrong.com/lw/kr/an_alien_god/
Still no clue. Third time’s the charm?
Azathoth = theomorphized evolution. The evolutionary cause of our sense of justice, hunger for revenge, etc. is presumably in large part the increased fitness due to deterrence.
To clarify: My point was that the crucial aspect is not that people observe a punishment, and then infer that they should not commit crimes later. Rather, the important thing is that people, to the extent that they correctly model the rest of society and its response to their crimes, get an “internal simulation” that outputs “they will inflict disutility on you even if it’s expensive to do so, and even knowing that it failed to deter you”. And this model can only be correct and have this character if people really do punish in the crime-instances.
In other words, to the extent that people require punishments to deter, they only require subjunctive, not causal deterrence—though obviously the latter is factored in.
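The subjunctive-versus-causal distinction can be caricatured in a few lines of code. Everything here—the payoff numbers and the policy functions—is an illustrative assumption, not a claim about real criminal justice; the point is only that deterrence runs through the would-be criminal’s internal simulation of society’s committed policy, so an accurate model deters even though no punishment ever causally reaches the deterred agent.

```python
CRIME_GAIN = 5   # utility the agent would gain from the crime (assumed)
PUNISHMENT = 8   # disutility society is committed to inflicting (assumed)

def committed_policy(crime_committed: bool) -> int:
    """Society's policy: punish even though it is expensive and the crime
    has already happened (the punishment no longer deters *this* instance)."""
    return PUNISHMENT if crime_committed else 0

def lenient_policy(crime_committed: bool) -> int:
    """A society that credibly never punishes."""
    return 0

def agent_commits_crime(model_of_society) -> bool:
    """The agent runs an 'internal simulation' of society's response and
    acts only if the simulated outcome is net positive."""
    return CRIME_GAIN - model_of_society(True) > 0

# An accurate model of a punishing society deters the agent *before* any
# punishment ever happens: the deterrence is subjunctive, not causal.
assert agent_commits_crime(committed_policy) is False
assert agent_commits_crime(lenient_policy) is True
```

Note that past punishments appear nowhere in the calculation; only the modeled policy does. That is why credibly announcing a policy change—swapping in `lenient_policy`—destroys the deterrent effect of every punishment already delivered.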
That’s what I was referring to as the “otherwise-ungrounded deservedness of others [who defect] of being treated badly”—this internalization of subjunctive (and acausal) criteria feels like a desire for “justice” or “what is right” on the inside. In other words, we have causal reasons for doing things, and separate from that, we have acausal criteria that often conflict, and when the acausals outweigh, we get the feeling of, “this person should be punished, even if expensive, and even though the crime has already happened”.
That is generally called the “sense of justice”.
Another thought-experiment to heighten the distinction: if the President went on TV and said that starting this year, refusing to pay taxes would no longer be a crime, then the deterrence effect of having put people in jail for tax evasion would evaporate overnight. Every punishment would still have happened, but they would no longer deter future acts of the same kind.
Well said, but do we then conclude that we do actually value justice in itself? Or do we conclude that we value justice instrumentally? Yes, evolution designed us to care about justice for subjunctive deterrence reasons, but so what? Evolution designed all of our values instrumentally for all sorts of purposes that we may or may not care about. But that doesn’t mean we have no values. I have no idea how to answer this question, and am at a loss for how to determine whether a perceived value is terminal or instrumental, in general.
My article, “Morality as Parfitian-filtered decision theory?”, was devoted to exactly that question, and my conclusion is that justice—or at least the feeling that drives us to pursuit-of-justice actions—is an instrumental value, even though such actions cause harm to our terminal values. This is because theories that attempt to explain such “self-sacrificial” actions by positing justice (or morality, etc.) as a separate term in the agent’s utility function add complexity without corresponding explanatory power.
I skimmed the article. First, good idea. I would never have thought of that. But I do think there is a flaw. Given evolution, we would expect humans to have fairly complex utility functions and not simple utility functions. The complexity penalty for evolution + simple utility function could actually be higher than that of evolution + complicated utility function, depending on precisely how complex the simple and complicated utility functions are. For example, I assert that the complexity penalty for [evolution + a utility function with only one value (e.g. paper clips or happiness)] is higher than the complexity penalty for [evolution + any reasonable approximation to our current values].
This is only to say that a more complicated utility function for an evolved agent doesn’t necessarily imply a high complexity penalty. You could still be right in this particular case, but I’m not sure without actually being able to evaluate the relevant complexity penalties.
That’s a good point, and I’ll have to think about it.
I think the phrase “otherwise-ungrounded” is likely a mistake. People (and animals) conflate justice in the sense you describe of “set of subjunctive criteria” as well as justice in the folk sense of “these are the things which are a priori wrong and deserve punishment regardless of one’s society”. Most useful descriptions of justice need to combine and conflate these two (among other) senses into a coherent whole. Without such a combination phrases like “unjust law” become difficult to explain.
“Otherwise-ungrounded” is not the same as “ungrounded”; it’s just that it’s not grounded in a specific benefit that “treating defectors badly” causes.
I agree. You made this comment while I was revising the post to talk about justice as fairness. You might want to read it again, and see the Rawls link.
One of the directions I want to go in with this is to explore the idea that the utilitarian may want to discard this evolutionary baggage. It seems appealing, doesn’t it? But this “baggage” is exactly the same kind of baggage as enjoying sex, which we evolved in order to reproduce. The question is: Why is enjoying sex a value we want to keep, while enjoying punishing is a value we don’t want to keep?
I’m not sure who “we” is in that sentence… for example, I suspect a lot of people don’t endorse enjoying sex and do endorse enjoying punishment… so I’ll speak for myself here.
Value is complex. One of the components to the value equation for me seems to have to do with wanting other people to do what they want to do. Another has to do with wanting sex. A third has to do with wanting rule-violators to be punished.
Call those Va, Vs, and Vp, respectively.
Mutually consensual sex satisfies Va and Vs and is neutral with respect to Vp.
Nonconsensual sex satisfies Vs and antisatisfies Va and Vp. (That is, it causes other people to do things they actively don’t want to do, and it inflicts punishment on people who didn’t break any rules.)
Punishing criminals satisfies Vp, antisatisfies Va, and is neutral with respect to Vs.
Of course, all of the above are sweeping generalizations and have many, many exceptions. Again, I’m talking about myself here. And there are many, many more components to value, and they all enter into this calculation. But if we adopt a toy world for a second and assume that these are the only components and that they are equally weighted and binary, then it follows that:
Mutually consensual sex = (Va + Vs) >
Punishing criminals = (Vp - Va) >
Nonconsensual sex = (Vs - Va - Vp)
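The toy calculation above can be sketched directly. The component names and the equal, binary weighting are the comment’s own assumptions, not a real theory of value:

```python
def score(va: int, vs: int, vp: int) -> int:
    """Equally weighted, binary components: each of Va (others doing what
    they want), Vs (sex), Vp (punishing rule-violators) is +1 (satisfied),
    -1 (antisatisfied), or 0 (neutral)."""
    return va + vs + vp

consensual_sex    = score(va=+1, vs=+1, vp=0)   # Va + Vs
punishing_crime   = score(va=-1, vs=0,  vp=+1)  # Vp - Va
nonconsensual_sex = score(va=-1, vs=+1, vp=-1)  # Vs - Va - Vp

assert consensual_sex > punishing_crime > nonconsensual_sex

# The rationalization moves later in the comment are just sign flips:
# convincing yourself the victim deserved it flips vp to +1.
assert score(va=-1, vs=+1, vp=+1) > nonconsensual_sex   # Vs + Vp - Va
```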
Further, having values in conflict is uncomfortable.
So I’d expect that I’d find it easier to endorse the first of those (where all the value components are positive or neutral), and that I’d be more willing to edit my value system to resolve the latter two. Or to edit my environment so that the conflict-causing situations don’t arise.
Or, as is far more common in the real world, edit my perceptions of my environment so that I’m not aware of those conflicts. For example, if I convince myself that someone I raped did something to deserve it, then nonconsensual sex becomes (Vs + Vp - Va). If I further convince myself that they aren’t really people, then it becomes (Vs + Vp). That’s much more comfortable.
Or I can convince myself that they actually wanted it, in which case it becomes (Vs + Va), which is also more comfortable.
Etc.
So I think my answer to your question is: some people experience more conflict in their value systems around specific instances of punishment than around specific instances of sex; I expect those people to want to keep the value of sex more than they want to keep the value of punishment. For other people, the reverse is true; I expect them to want the reverse.
I upvoted Eugine_Nier’s reply, but I think in current mainstream western society it’s because enjoying sex superficially looks like a net positive (though it may in some cases be net negative), while enjoying punishing looks net negative. And historically there is certainly no shortage of groups who tried to stop valuing sex; I wouldn’t be surprised if the total number of people who tried that exceeded the total number who tried to make themselves not value punishment.
The reason it seems that way is that in our current culture, enjoying sex is considered high status, whereas enjoying punishment is considered low status.
Edit: Now that I think about it, my main point is that “enjoying punishment is bad” and “enjoying sex is OK” are cultural values. Other cultures take different positions on these issues, and I don’t want to presuppose that our culture is necessarily correct here.
I think this is what people are complaining about when they complain that “status” is being used isomorphically to “magic”.
While Eugine’s comment could benefit from some clarification and refinement, I think you’re wrong to dismiss it as worthless. The status (or whatever similar term you want to use) assigned to enjoyment of sex versus enjoyment of punishment is definitely a culture-specific thing. (I can easily imagine a culture that, unlike ours, extols sexual asceticism along with righteous cruelty, and some historical examples aren’t too far off.) This certainly influences the choice of values that people would like to see perpetuated.
You just deconstructed a large part of my next post before I made it.
In this case it’s quite plausible though: People who advocate less sex or not enjoying sex and aren’t conservatives/religious/anti-pleasure in general expose themselves to the sour grapes explanation of their position and the low status implications of that. This is obvious enough that the Simpsons did it.
Careful about arguments that prove too much. My point is precisely that advocating not enjoying sex being low status is not universal in all cultures.
I’m quite aware of that. Anti-sex ideologies are usually either purity ideologies (conservative) or anti-pleasure in general, though. Some forms of feminism are an exception, but for women the status-sex relation is completely different anyway.
It’s plausible that this happens; but you can’t use it to explain itself. “Enjoying sex is high-status because it is high-status” doesn’t explain anything. Hence Eugine’s edit.
Not being able to get sex being low status doesn’t require much in the way of explanation.
The super-parent is attempting to provide a reason why enjoying (not having) sex would be high-status, while enjoying punishing would be low status. Having sex is naturally high-status; but punishing people is equally high-status.
I’m not aware of the Simpsons ever making a joke about a character regulating punishment because they were too much of a loser to get in on everyone else’s punishment fun. The jump from “I’m against enjoying punishment” to “I’m too low status to get to punish people” is for whatever reason a lot longer than the equivalent for sex. It’s not at all obvious that this is due to the status differences themselves and not due to, say, logistics or some other non-status reason. Note that I’m not actually sharing Eugine_Nier’s position, I’m just defending it as at the very least not obviously useless.
I’m not sure what this is supposed to show besides the fact that the Simpsons reflects the culture that produced it.
I’d like to know what you think my position is, since in the above discussion I’ve found PhilGoetz’s posts to be closer to my position than your posts.
It wasn’t meant to show anything beyond tracing back the asymmetry concerning the two items in the culture that produced the Simpsons to a point where they might possibly be connected to non-status causes such as logistics.
That the restatement of the observation as “in our current culture, enjoying sex is considered high status, whereas enjoying punishment is considered low status” would contribute to finding the mechanism that causes the observation, instead of being the type of word magic Eliezer saw it as, which implies that status mechanisms are part of the causal chain.
I am aware that you disagree with the particulars of the causal chain elements I suggested, but you haven’t proposed an alternative yet. If the restatement wasn’t meant to help explain I’ll retract my upvote.
I interpreted Eliezer’s comment as meaning that “high status” and “low status” are approximately synonymous with the things they’re being invoked to explain. (Or, at least, that no special motivation was given to expect status to play a role. It’s a reasonable heuristic for every social behavior to say, “Look for an explanation involving status”—but that also means it does not explain anything to say that; it’s the default assumption.)
That’s pretty much what I meant with “restatement” and “word magic”. As for default assumptions, that’s true if you don’t require the word status to do useful work compared to an equivalent explanation without that word, but “the word ‘status’ will do useful work here” would be a productive statement to make if true.
It’s an interesting distinction; I’ll need to think more on how I feel about it.
I would certainly be hesitant to discard most evolutionary baggage, however. http://xkcd.com/592/
It doesn’t make sense from the utilitarian perspective of an omnipotent God centrally planning everything. However, in a situation where individual actors have some degree of autonomy so that higher-order game-theoretic effects are relevant, and where nobody has perfect information and foresight, it makes plenty of sense even for a strict utilitarian.
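One way to make that concrete is a toy one-shot sketch (all payoffs are invented for illustration): a central planner with perfect control would skip the costly ex-post punishment, but once an autonomous agent acts on its model of the policy, committing to punish yields higher total utility even for a strict utilitarian.

```python
COOP = 3         # each party's payoff under cooperation (assumed)
DEFECT_GAIN = 5  # defector's gain (assumed)
VICTIM_LOSS = 4  # harm a defection does to others (assumed)
PUNISH_COST = 1  # what it costs society to punish (assumed)
PUNISH_HIT = 6   # what punishment costs the defector (assumed)

def defects(policy_punishes: bool) -> bool:
    # The autonomous agent models the committed policy before acting.
    gain = DEFECT_GAIN - (PUNISH_HIT if policy_punishes else 0)
    return gain > COOP

def total_utility(policy_punishes: bool) -> int:
    """Sum of utilities across the agent and the rest of society."""
    if defects(policy_punishes):
        u = DEFECT_GAIN - VICTIM_LOSS
        if policy_punishes:
            u -= PUNISH_COST + PUNISH_HIT  # pure ex-post cost
        return u
    return 2 * COOP  # both the agent and the rest of society gain

# Punishing after the fact is pure cost, yet the *commitment* to punish
# produces the better total outcome for a strict utilitarian.
assert total_utility(policy_punishes=True) > total_utility(policy_punishes=False)
```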
(This is one of the main reasons why I see no value in utilitarianism, since attempts to apply it that don’t reduce to other approaches like virtue ethics almost inevitably end up detached from reality.)
I would say, rather: Approaches like virtue ethics are collections of heuristics that can be used to verify a utilitarian ethics. A utilitarian approach that does not match up with these collections, does not conform to any common human ethical system.
That doesn’t make virtue ethics superior. It’s more observational (empirical), while a utilitarian approach can be more theoretical. A utilitarian approach has more coverage. Anyone trying to deal with the future will find utilitarian approaches more valuable than will people studying the past.
(The human ethical system is attached to reality—but is it attached in a normative way?)
There is another important point: a utilitarian may conclude that given imperfect information and limited ability to predict the consequences of acts reliably, it’s usually impossible to do meaningful explicit utility calculations, so that the best way to guide one’s actions is heuristics such as virtue ethics (which corresponds closely with the intuitive folk ethics that everyone uses in real life anyway). I had this in mind when I wrote about utilitarianism “reducing” (probably not a good choice of word) to other ethical systems.
I’m not sure what exactly you mean by “attached in a normative way” here.
The opposite of whatever you meant when you said “detached from reality”.
This might be a misunderstanding then. What I meant by “detached from reality” is that attempts to do explicit utilitarian calculations for practical problems almost invariably end up working with unrealistic models and thus producing worthless and misguided conclusions, even if we agree that the theoretical premises of utilitarianism are sound (not that I do). In contrast, regular folk virtue ethics does produce workable guidelines in practice, and in this regard it is attached to reality. But what does the qualifier “attached [to reality] in a normative way” add to that observation?