An attempt to ‘explain away’ virtue ethics
Recently I summarized Joshua Greene’s attempt to ‘explain away’ deontological ethics by revealing the cognitive algorithms that generate deontological judgments and showing that the causes of our deontological judgments are inconsistent with normative principles we would endorse.
Mark Alfano has recently done the same thing with virtue ethics (which generally requires a fairly robust theory of character trait possession) in his March 2011 article on the topic:
I discuss the attribution errors, which are peculiar to our folk intuitions about traits. Next, I turn to the input heuristics and biases, which — though they apply more broadly than just to reasoning about traits — entail further errors in our judgments about trait-possession. After that, I discuss the processing heuristics and biases, which again apply more broadly than the attribution errors but are nevertheless relevant to intuitions about traits… I explain what the biases are, cite the relevant authorities, and draw inferences from them in order to show their relevance to the dialectic about virtue ethics. At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.
An overview of the ‘situationist’ attack on character trait possession can be found in Doris’ book Lack of Character.
Alfano says:
This sounds absurd on its face. If Alfano finds out that someone has a history of cheating and stealing, will he avoid having any business with this person, expecting similar behavior in the future, or will he “reject such knowledge-claims… based merely on folk intuitions”?
Are his claims really so silly, or am I missing something?
If a person’s history is one of cheating in business, it might be that the person habitually and easily lies on the phone, when he or she can’t see who is on the other end. The person might be solidly in the middle of the bell curve for everything except a predilection to dehumanization. (Scholarship FTW.)
Alternatively, the person might be in a unique situation (such as being blind, isolated, and requiring a reader to speak received emails aloud in a Stephen-Hawking voice) in which anyone would experience dehumanization sufficient to make them a cheater. (I’m not claiming this is the case, just that some similarly plausible set-up could produce such actions, just as the time since judges last ate affects their sentencing.)
So virtue ethics breaks down either because people’s uniqueness lies in their responses to biases, or because people are overwhelmingly, chaotically directed by features of their environments.
Either way, cheaters and thieves are likely to cheat or steal again.
If I can look someone in the face, I can usually detect lying. Voice only, I can often detect lying. Text only, I can sometimes detect lying.
Thus if a person is honest in proportion to the bandwidth, this requires no more psychological explanation than the fact that burglars are apt to burgle at night.
Is that the same way you can divine people’s true natures?
The Wizards Project tested 20,000 people to come up with 50 who panned out; an aggregation of techniques offered no better than 70% accuracy; and people with no instructions did little better than chance in distinguishing lies and truth.
But I suppose these results (and the failings of mechanical lie detectors) are just unscientific research, which pale next to the burning truth of your subjective conviction that you “can usually detect lying”.
What was the self-assuredness of the 20,000? What was the self-assuredness of the 50?
What was the ability of the top 100, or 1,000, as against the top 50?
Does any of that really matter? This is the same person who thinks a passel of cognitive biases doesn’t apply to him and that the whole field is nonsense trumped by unexamined common sense. (Talk about ‘just give up already’.)
If the top 200 lie-detectors were among the 400 most confident people at the outset, I would think that relevant.
And how likely is that, really?
This is the sort of desperate dialectics, verging on logical rudeness, that I find really annoying: trying to rescue a baloney claim by appeal to any bare possibility. If you seriously think that, great: go read the papers and tell me, and I will be duly surprised to learn that the human lie-detectors are the best-calibrated people in that group of 20,000, and hence that this factoid might apply to the person we are discussing.
Seems like homework for the person making the claim; I’m just pointing out that the possibility exists.
Nit-pick: they could be the worst calibrated and what I said would still hold, provided the others estimated themselves to be suitably bad at it.
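As an aside, the arithmetic in this exchange is easy to make explicit. Below is a minimal sketch (Python) using only the figures quoted in the thread: 20,000 tested, 50 who panned out, and the hypothetical ‘top 200 among the 400 most confident’. Everything else is illustrative, not from the papers.

```python
# Toy arithmetic for the lie-detection exchange above.
tested = 20_000   # people screened in the Wizards Project (per the thread)
wizards = 50      # the ones who panned out

# Base rate: the prior that any given claimant is a genuine lie-detector.
# If self-assurance carries no information, this is also
# P(wizard | claims to "usually detect lying").
base_rate = wizards / tested
print(f"base rate of wizards: {base_rate:.2%}")  # 0.25%

# The generous hypothetical floated above: the top 200 performers all
# sit among the 400 most confident. Even then, confidence alone only
# gets a claimant to 200/400.
p_given_confident = 200 / 400
print(f"P(top-200 performer | very confident): {p_given_confident:.0%}")  # 50%
```

Even under the most charitable reading, then, a claim of reliable lie-detection starts from very long odds.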
That academics who do not want to succeed at something tend to be grossly unsuccessful at it is weak evidence that it cannot be done.
Some business places have a lot of small, high-value stuff, easily stolen, and a lot of employees with unmonitored access to that stuff.
Somehow they succeed in selecting (as close to 100% as makes no difference) employees who do not steal.
The evidence that people cannot detect lying resembles the evidence that the scientific method is undefined and impossible.
The existence and practice of certain business places shows that some people are very good at predicting other people’s behavior, even when those people would prefer that they fail to predict that behavior.
These academics would be richly rewarded, in and out of academia, for finding human lie detectors, and even more so for finding techniques to train people to become such. This is true for all the obvious reasons, and for the more subtle reason that saying ‘99.75% of people suck at this, and the ones who don’t think so are self-deluded’ is a negative result, and academia punishes negative results.
(Also, bizarre ad hominem with no real world backing. How on earth are you getting upvotes?)
‘Shrinkage’ is and remains a problem in retail; the solutions to it have nothing to do with human lie detectors. The solutions involve filtering heavily for people who have demonstrated that they haven’t stolen in the past, summary termination upon theft, technological counter-measures, and elaborate social sanctions. If human lie detectors existed in such quantities, or humans were so analyzable, why do the diamond dealers of NYC resort to such desperate means as dealing as much as possible with co-ethnics whose decades of reputation and social connections stand hostage for their business dealings?
(Non sequitur; how on earth is this getting upvoted?)
No evidence cited, and what is this juvenile relativism doing here?
I like how this looks like an argument, yet completely fails to include any information that matters at all. ‘Existence and practice’, ‘certain business places’, ‘some people’: all of these are empty of semantic content.
And even assuming you filled in these statements with something meaningful, so what? The point of the OP was not that predictions cannot be made about humans, the point is that the predictions are not made by a hypothetical ‘character’. Predictions made by situation are quite powerful, and I would expect that many businesses exploit this quite a bit in all sorts of ways, like placement of goods in grocery stores.
(Non sequitur again; good grief.)
Better not to go there.
When a businessman wants to detect liars, he is not going to turn to academia.
The strange inability of academia to detect a propensity to bad behavior, or to acknowledge that anyone else can detect such propensities, is based on its horror of “discrimination”.
Recall that you could tell the shoe bomber was a terrorist at forty paces, you could tell on sight that Umar Farouk Abdulmutallab was some kind of criminal and up to no good, and yet the TSA insists on groping the genitals of six year old girls.
Although academics can supposedly scientifically prove it is impossible to detect propensities to behave badly, they are able to do a remarkably good job at detecting the slightest propensity to engage in politically incorrect thoughts.
Retail has low-value stuff and low-wage employees. With high-value stuff and high-wage employees, you can hire more carefully.
Retail has shrinkage because they don’t care that much about shrinkage. When they do care about shrinkage, they can and routinely do solve the problem, notwithstanding academics piously saying it cannot be done.
Yet, oddly, businesses bet on character all the time. The claim that you cannot tell is political correctness that all normal people ridicule, much as they ridicule the TSA.
This seems obviously false to me. It may well be true that, in general, situational influences swamp dispositional ones. But that doesn’t mean that it’s pointless to try to cultivate virtue and teach yourself to behave virtuously. You might not always succeed, but as long as the effect of dispositional influences isn’t entirely negligible, you will succeed more often than if you didn’t cultivate virtue.
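To make the quantitative point concrete, here is a minimal toy simulation. The weights are assumptions chosen for illustration (situational influence dominating dispositional influence ten to one); nothing in it comes from Alfano or Doris.

```python
import random

def acts_virtuously(disposition, w_situation=1.0, w_disposition=0.1):
    """Toy model: behavior is a weighted sum of a random situational
    push and a stable disposition; the act is virtuous if the sum
    comes out positive."""
    situation = random.gauss(0, 1)
    return w_situation * situation + w_disposition * disposition > 0

def virtuous_rate(disposition, trials=100_000):
    return sum(acts_virtuously(disposition) for _ in range(trials)) / trials

random.seed(0)
# Even with disposition weighted ten times less than situation,
# cultivating virtue (disposition 1.0 vs. 0.0) still raises the rate
# of virtuous action.
print(f"uncultivated: {virtuous_rate(0.0):.1%}")  # ~50%
print(f"cultivated:   {virtuous_rate(1.0):.1%}")  # ~54%
```

A small edge, compounded over many choices, is still an edge; that is all the comment’s argument needs.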
You could use the same reasoning to argue that consequentialism is in dire straits: Wanting to act in a consequentialist manner is a human disposition, but situational influences swamp dispositional ones. Thus, consequentialism cannot reasonably recommend that people act in a consequentialist manner, because that is not a possible property of “creatures like us”.
Alfano is entirely too strict about knowledge, though he rests comfortably within the philosophical landscape there. “Can we know on the basis of folk intuitions that we have traits” isn’t as interesting a question when seen in these terms. He does not address the question “Are our folk intuitions about traits strong Bayesian evidence for their existence?”, which would be required to dismiss consideration of folk intuitions entirely, as he does. Thus, his claim “We need pay no heed to any attempt to defend virtue ethics that appeals only to intuitions about character traits” has not been proven satisfactorily.
Nonetheless, it’s very nice for him that he’s discovered that there are biases. Anyone who believes that virtue ethics is true should certainly be aware of the relevant ones.
I submit that the form of his argument could be used just as well against any knowledge claim using those definitions and picking some relevant biases.
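The gap between ‘cannot ground a knowledge-claim’ and ‘is not Bayesian evidence’ can be made explicit with Bayes’ rule. A minimal sketch with invented likelihoods: a bias-prone intuition still counts as evidence so long as it fires more often when the trait is real than when it is not.

```python
def posterior(prior, p_intuition_given_trait, p_intuition_given_no_trait):
    """Bayes' rule: update a prior on trait-possession after observing
    a folk intuition that the trait is present."""
    joint_trait = prior * p_intuition_given_trait
    joint_no_trait = (1 - prior) * p_intuition_given_no_trait
    return joint_trait / (joint_trait + joint_no_trait)

# Illustrative numbers only: suppose intuitions fire 80% of the time
# when the trait exists but, thanks to attribution biases, also 50% of
# the time when it doesn't. The likelihood ratio 0.8/0.5 = 1.6 > 1, so
# the intuition shifts the posterior upward: weak evidence, but evidence.
print(posterior(0.5, 0.8, 0.5))  # 0.615...
```

So dismissing folk intuitions entirely requires showing that the likelihood ratio is close to 1, not merely that the intuitions are biased.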
Some excerpts:
I get the impression I can predict specific bad behavior pretty reliably, implying that folk wisdom can achieve markedly higher correlations than psychometric traits.
I find it amusing that I can quote a paper on how 5-10 cognitive biases lead us to think that there are stable predictable ‘character traits’ in people with major correlations, and then the first reply is someone saying that they think they see such traits.
I see.
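A toy model can show how the biases cited in the paper could manufacture the impression of reliable trait-detection even where none exists. The parameters below (a predictor at exactly chance, confirmations remembered far more often than misses, in the style of confirmation bias) are assumptions for illustration, not measurements.

```python
import random

random.seed(1)
trials = 10_000
true_accuracy = 0.5   # the predictor actually does no better than chance
recall_hit = 0.9      # confirmed predictions are memorable...
recall_miss = 0.3     # ...misses tend to be forgotten or explained away

remembered_hits = remembered_misses = 0
for _ in range(trials):
    hit = random.random() < true_accuracy
    if hit and random.random() < recall_hit:
        remembered_hits += 1
    elif not hit and random.random() < recall_miss:
        remembered_misses += 1

perceived = remembered_hits / (remembered_hits + remembered_misses)
print(f"true accuracy: {true_accuracy:.0%}, perceived: {perceived:.0%}")  # ~75%
```

On these assumptions, an honest reporter of their remembered track record would sincerely claim to predict ‘pretty reliably’.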
Such papers come from a field whose claims to be scientific, whose claims to be a field of science at all, are far from universally accepted.
Since its claims to be scientific are weak, any contradiction between its claims and common sense should be interpreted to its disfavor, and in favor of common sense.
It seems plausible that our capacity for moral judgment might mirror our capacity for belief formation in that it includes crude but efficient algorithms like what we call cognitive biases in belief formation. But I don’t think it follows that we can make our moral judgments ‘more accurate’ by removing moral ‘biases’ in favor of some idealized moral formula. What our crude but efficient moral heuristics are approximating is evolutionarily advantageous strategies for our memes and genes. But I don’t really care about replicating the things that programmed me—I just care about what they programmed me to care about.
In belief formation there are likely biases that have evolutionary benefits too- it is easier to deceive others if you sincerely believe you will cooperate when you are in a position to defect without retaliation, for example. But we have an outside standard to check our beliefs against—experience. We know after many iterations of prediction and experiment which reasons for beliefs are reliable and which are not. Obviously, a good epistemology is a lot trickier than I’ve made it sound but it seems like, in principle, we can make our beliefs more accurate by checking them against reality.
I can’t see an analogous standard for moral judgments. This wouldn’t be a big problem if our brains were cleanly divided into value-parts and belief-parts. We could then just fix the belief parts and keep the crude-but-hey-that’s-how-evolution-made-us value parts. But it seems like our values and beliefs are all mixed up in our cognitive soup. We need a sieve.
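The ‘outside standard’ for beliefs can be stated operationally: score stated probabilities against what actually happened. Here is a minimal sketch using the Brier score; the predictions and outcomes are invented for illustration.

```python
def brier_score(predictions, outcomes):
    """Mean squared error between stated probabilities and observed
    outcomes: 0 is perfect; 0.25 is what always saying 50% earns."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Two belief-forming methods scored against the same observed outcomes.
outcomes      = [1, 0, 1, 1, 0, 1, 0, 0]
careful       = [0.9, 0.2, 0.8, 0.7, 0.1, 0.9, 0.3, 0.2]
overconfident = [1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]

print(f"careful:       {brier_score(careful, outcomes):.3f}")        # 0.041
print(f"overconfident: {brier_score(overconfident, outcomes):.3f}")  # 0.375
```

Experience adjudicates between the two methods automatically. The comment’s point is that there is no analogous scoring rule for values: no observation tells a moral heuristic it was ‘wrong’.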
Tangential public advisory: I suspect that it is a bad cached pattern to focus on the abstraction where it is memes and genes that created you rather than, say, your ecological-developmental history or your self two years ago or various plausibly ideal futures you would like to bring about &c. In the context of decision theory I’ll sometimes talk about an agent inheriting the decision policy of its creator process which sometimes causes people to go “well I don’t want what evolution wants, nyahhh” which invariably makes me facepalm repeatedly in despair.
I do not see how the false consensus effect advances the argument.
A LW post on an example used in the more common, stronger argument against virtue ethics: that we have no character traits at all. ‘Stronger’ in that it makes more ambitious claims, not because it is more likely true.