simplistic consequentialist views such as this one
ignorance and lack of insight
Waaah! You’re a meanie mean-head! :( By which I mean: this was a one-sentence reaction to simplistic virtue ethics. I agree it’s not a valid criticism of complex systems like Alicorn’s tiered deontology. I also agree it’s fair to describe this view as simplistic—at the end of the day, I do in fact hold the naive view. I disagree that it can only exist in ignorance of counterarguments. In general, boiling down a position to one sentence provides no way to distinguish between “I don’t know any counterarguments” and “I know counterarguments, all of which I have rejected”.
supreme self-assuredness
Not sure what you mean; I’m going to map it onto “arrogance” until and unless I learn you meant otherwise. Arrogant people are annoying (hi, atheist blogosphere!), but in practice arrogance isn’t correlated with holding false ideas.
Or is this just a regular accusation of overconfidence, stemming from “Hey, you underestimate the number of arguments you haven’t considered!”?
my responses in the Consequentialism FAQ thread
You go into social-norms-as-Schelling-points in detail (you seem to point at the existence of other strong arguments?); I agree about the basic idea (that’s why I don’t kill for organs). I disagree about how easily we should violate them. (In particular, Near lives are much safer to trade than Far ones.) Even “Only kill without provocation in the exact circumstances of one of the trolley problems” is a feasible change.
Also, least convenient possible world: after the experiment, everyone in the world goes into a holodeck and never interacts with anyone again.
Interestingly, when you said
Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twitch of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero.
I automatically pictured myself as the fat man, and felt admiration and gratitude for the heroic sociopath. Then I realized you meant a third party, and did feel creeped out. (This is as it should be; I should be more eager to die than to kill, to correct for selfishness.)
By which I mean: this was a one-sentence reaction to simplistic virtue ethics.
Actually, I was writing in favor of “simplistic” virtue ethics. However simplistic and irrational it may seem, and however rational, sophisticated, and logically airtight the consequentialist alternatives may appear to be, folk virtue ethics is a robust and workable way of managing human interaction and coordination, while consequentialist reasoning is usually at best simply wrong and at worst a rationalization of beliefs held for different (and often ugly) reasons.
You can compare it with folk physics vs. scientific physics. The former has many flaws, but even if you’re a physicist, for nearly all things you do in practice, scientific physics is useless, while folk physics works great. (You won’t learn to ride a bike or throw a ball by studying physics, but by honing your folk physics instincts.) While folk physics works robustly and reliably in complex and messy real-world situations, handling them with scientific physics is often intractable and always prone to error.
Of course, this comparison is too favorable. We do know enough scientific physics to apply it to almost any situation at least in principle, and there are many situations where we know how to apply it successfully with real accuracy and rigor, and where folk physics is useless or worse. In contrast, attempts to supersede folk virtue ethics with consequentialism are practically always fallacious one way or another.
So, the fully naive system? Killing makes you a bad person, letting people die is neutral; saving lives makes you a good person, letting people live is neutral. Giving to charity is good, because sacrifice and wanting to help makes you a good person. There are sacred values (e.g. lives) and mundane ones (e.g. money) and trading between them makes you a bad person. What matters is being a good person, not effects like expected number of deaths, so running cost-benefit analyses is at best misguided and at worst evil. Is this a fair description of folk ethics?
If so, I would argue that the bar for doing better is very, very low. There are a zillion biases that apply: scope insensitivity, loss aversion that flips decisions depending on framing, need for closure, pressure to conform, Near/Far discrepancies, fuzzy judgements that mix up feasible and desirable, outright wishful thinking, prejudice against outgroups, overconfidence, and so on. In ethics, unless you’re going to get punished for defecting against a norm, you don’t have a stake, so biases can run free and don’t get any feedback.
Now there are consequentialist arguments for virtue ethics, and general majoritarian-ish arguments for “norms aren’t completely stupid”, so this only argues for “keep roughly the same system but correct for known biases”. But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
And this is a consequentialist argument. “If I try to kill some to save more, I’ll almost certainly overestimate lives saved and underestimate knock-on effects” is a perfectly good argument. “Killing some to save more makes me a bad person”… not so much.
No, because we don’t even know (yet?) how to formulate such a description. The actual decision procedures in our heads have still not been reverse-engineered, and even insofar as they have, they have still not been explained in game-theoretical and other important terms. We have only started to scratch the surface in this respect.
(Note also that there is a big difference between the principles that people will affirm in the abstract and those they apply in practice, and these inconsistencies are also still far from being fully explained.)
But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
Trouble is, once you go down that road, it’s likely that you’re going to come up with fatally misguided or biased conclusions. For practically any problem that’s complicated enough to be realistic and interesting, we lack the necessary knowledge and computational resources to make reliable consequentialist assessments, in terms of QALYs or any other standardized measure of welfare. (Also, very few, if any, things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.)
Moreover, for any problem that is relevant for questions of power, status, wealth, and ideology, it’s practically impossible to avoid biases. In the end, what looks like a dispassionate and perhaps even scientific attempt to evaluate things using some standardized measure of welfare is more likely than not to be just a sophisticated fig-leaf (conscious or not) for some ideological agenda. (Most notably, the majority of what we call “social science” has historically been developed for that purpose.)
Yes, this is a very pessimistic verdict, but an attempt at sound reasoning should start by recognizing the limits of our knowledge.
I agree with much of your worldview as I’ve interpreted it. In particular I agree that:
•Behavioral norms evolved by natural selection to solve coordination problems and to allow humans to work together productively given the particulars of our biological hard-wiring.
•Many apparently logically sound departures from behavioral norms will not serve their intended functions for complicated reasons of which people don’t have explicit understanding.
•Human civilization is a complicated dynamical system which is (in some sense) at equilibrium, and attempts to shift from this equilibrium will often either fail (because of equilibrating forces) or lead to disaster (on account of destabilizing the equilibrium and causing everything to fall apart).
•The standard of rigor and accuracy in the social sciences is often very poor, owing both to the biases of the researchers involved and to the inherent complexity of the relevant problems (as you described in your top-level post).
On the other hand, here and elsewhere in the thread you present criticism without offering alternatives. Criticism is not without value but its value is contingent on the existence of superior alternatives.
But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
Trouble is, once you go down that road, it’s likely that you’re going to come up with fatally misguided or biased conclusions.
What do you suggest as an alternative to MixedNuts’ suggestion?
As rhollerith_dot_com said, folk ethics gives ambiguous prescriptions in many cases of practical import. One can avoid some such issues by focusing one’s efforts elsewhere, but not in all cases. People representative of the general population have strong differences of opinion as to what sorts of jobs are virtuous and what sorts of philanthropic activities are worthwhile. So folk ethics alone doesn’t suffice to give a practically applicable ethical theory.
Also, very few, if any, things people do result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The discussion has drifted away somewhat from the original disagreement, which was about situations where a seemingly clear-cut consequentialist argument clashes with a nearly universal folk-ethical intuition (as exemplified by various trolley-type problems). I agree that folk ethics (and its natural customary and institutional outgrowths) are ambiguous and conflicted in some situations to the point of being useless as a guide, and the number of such situations may well increase with the technological developments in the future. I don’t pretend to have any great insight about these problems. In this discussion, I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased.
Regarding this, though:
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The important point is that most conflicts get resolved in spontaneous, or at least tolerably costly ways because the conflicting parties tacitly share a focal point when an interpersonal trade-off is inevitable. The key insight here is that important focal points that enable things to run smoothly often lack any rational justification by themselves. What makes them valuable is simply that they are recognized as such by all the parties involved, whatever they are—and therefore they often may seem completely irrational or unfair by other standards.
Now, consequentialists may come up with a way of improving this situation by whatever measure of welfare they use. However, what they cannot do reliably is to make people accept the implied new interpersonal trade-offs as new focal points, and if they don’t, the plan will backfire—maybe with a spontaneous reversion to the status quo ante, and maybe with a disastrous conflict brought by the wrecking of the old network of tacit agreements. Of course, it may also happen that the new interpersonal trade-offs are accepted (whether enthusiastically or by forceful imposition) and the reform is successful. What is essential to recognize, however, is that interpersonal trade-offs are not only theoretically indeterminate, but also that any way of resolving them must deal with these complicated issues of whether it will be workable in practice. For this reason, many consequentialist designs that look great on paper are best avoided in practice.
I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased
I agree. And I like the rest of your response about tacitly shared focal points.
Part of what you may be running up against on LW is people here
(a) Having low intuitive sense for what these focal points are
(b) The existing norms being designed to be tolerable for ‘most people’ and LWers falling outside of ‘most people,’ and correspondingly finding existing norms intolerable with higher than usual frequency.
I know that each of (a) and (b) sometimes apply to me personally
Your future remarks on this subject may be more lucid if you bring the content of your above comment to the fore at the outset.
Okay, I don’t get it. I can only parse what you’re saying one of two ways:
“We don’t have any idea of how folk ethics works.” But that’s not true: we know it’s not “whatever Emperor Ming says”. We can and do observe folk ethics at work, and notice that it favors ingroups, is loss-averse, is scope-insensitive, etc.
“Any attempt to do better won’t be perfectly free of bias. Therefore, you can’t do better. Therefore, the best you can do is to use folk ethics… which has a bunch of known biases.”
You very likely don’t mean either of these, so I don’t know what you’re trying to say.
These statements are a somewhat crude and exaggerated version of what I had in mind, but they’re actually not that far off the mark.
The basic human folk ethics, shaped within certain bounds by culture, is amazingly successful in ensuring human coordination and cooperation in practice, at both small and large scales. (The fact that we see its occasional bad failures as dramatic and tragic only shows that we’re used to it working great most of the time.) The key issue here is that these coordination problems are extremely hard and largely beyond our understanding. While we can predict with some accuracy how individual humans behave, the problems of coordinating groups of people involve countless complicated issues of game theory, signaling, etc., about which we’re still largely ignorant. In this sense, we really don’t understand how folk ethics works.
Now, the important thing to note is that various aspects of folk ethics may seem irrational and biased (in the sense that changing them would have positive consequences by some reasonable measure), while in fact the truth is much more complicated. These “biases” may in fact be essential to the way human coordination works in practice, for some reason that’s still mysterious to us. Even if they don’t serve any direct useful purpose, it may well be that given the constraints of human minds, eliminating them is impossible without breaking something else badly. (A prime example is that once someone goes down the road of breaking intuitively appealing folk-ethics principles in the name of consequentialist calculations, it’s practically certain that these calculations will end up being fatally biased.)
Here I have of course handwaved the question of how exactly successful human cooperation depends on the culture-specific content of people’s folk ethics. That question is fascinating, complicated, and impossible to tackle without opening all sorts of ideologically charged issues. But in any case, it presents even further complications and difficulties for any attempt at analyzing and fixing human intuitions by consequentialist reasoning.
(Also, similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive “rationalist” perspective, but whose role in practice is much more complicated and important.)
similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive “rationalist” perspective, but whose role in practice is much more complicated and important.
Yeah, that seems to be the crux of our disagreement. You still trust people, you haven’t seen them march into death and drag their children along with them and reject a thousand warnings along the way with contempt for such absurd and evil suggestions.
I agree that going against social norms is very costly, that we need cooperation more than ever now there’s seven billion of us, and that if something is bad you still need to coordinate against it. But consider this anecdote:
Many years ago, when I was but a child, I wished to search for the best and rightest politician, and to put them in power. And eagerly did I listen to all, and carefully did I consider their arguments, and honestly did I weigh them against history and the evening news. And lo, an ideology was born, and I gave it my allegiance. But still doubts nagged and arguments wavered, and I wished for closure.
One day my politician of choice called for a rally, and to the rally I went; filled with doubt, but willing to serve. And such joy came upon me that I knew I was right; this wave of bliss was the true sign that my cause was just. (For I was but a child, and did not know of laws of entanglement; I knew not that the bliss spoke of human psychology, and told not of world states.)
Then it came to pass that I read a history textbook, and in the book was an excerpt from Robert Brasillach, who too described this joy, and who too claimed it as proof of his ideology. Which was fascism. Oops.
Could you say more about what makes folk ethics a form of virtue ethics (or at least sufficiently virtue-based for you to use the term “folk virtue ethics”)? I can see some aspects of it that are virtue-based, but overall it seems like a hodgepodge of different intuitions/emotions/etc.
Yes, it’s certainly not a clear-cut classification. However, I’d say that the principal mechanisms of folk ethics are very much virtue-based, i.e. they revolve around asking what sort of person acts in a particular way, and what can be inferred about others’ actions and one’s own choice of actions from that.
Your praise for folk ethics would be more persuasive to me, Vladimir, if it came with a description of folk ethics—and if that description explained how folk ethics avoids giving ambiguous answers in many important situations—because it seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.
In other words, although I am sympathetic to arguments for conservatism in matters of interpersonal relationships and social institutions, your argument would be a whole lot stronger if the process of identifying or determining the thing being argued for did not rely entirely on the phrase “folk virtue ethics”.
I don’t think we need to get into any controversial questions about interpersonal relationships and social institutions here. (Although the arguments I’ve made apply to these too.) I’d rather focus on the entirely ordinary, mundane, and uncontroversial instances of human cooperation and coordination. With this in mind, I think you’re making a mistake when you write:
[I]t seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.
In fact, the overwhelming part of folk ethics consists of decisions that are so ordinary and uncontroversial that we don’t even stop to think about them, and of interactions (and the resulting social norms and institutions) that are taken completely for granted by everyone—even though the complexity of the underlying coordination problems is enormous, and the way things really work is still largely mysterious to us. The thesis I’m advancing is that a lot of what may seem like bias and imperfection in folk ethics may in fact somehow be essential for the way these problems get solved, and seemingly airtight consequentialist arguments against clear folk-ethical intuitions may in fact be fatally flawed in this regard. (And I think they nearly always are.)
Now, if we move to the question of what happens in those exceptional situations where there is controversy and conflict, things do get more complicated. Here it’s important to note that the boundary between regular smooth human interactions and conflicts is fuzzy, insofar as the regular interactions often involve conflict resolution in regular and automatic ways, and there are no sharp limits between such events and more overt and dramatic conflict. Also, there is no sharp boundary between entirely instinctive folk-ethics intuitions and those that are codified in more explicit social (and ultimately legal) norms.
And here we get to the controversies that you mention: the conflict between social and legal norms that embody and formalize folk intuitions of justice, fairness, proper behavior, etc., and evolve spontaneously through tradition, precedent, customary practice, etc., and the attempts to replace such norms with new ones backed by consequentialist arguments. Here, indeed, one can argue in favor of what you call “conservatism in matters of interpersonal relationships and social institutions” using arguments very similar to mine above. But whether or not you agree with such arguments, my main point can be made without even getting into any controversial issues.
Thanks for your response!
I agree. And I like the rest of your response about tacitly shared focal points.
Part of what you may be running up against on LW is (a) people here having a low intuitive sense for what these focal points are, and (b) the existing norms being designed to be tolerable for ‘most people,’ with LWers falling outside of ‘most people’ and correspondingly finding existing norms intolerable with higher than usual frequency.
I know that each of (a) and (b) sometimes applies to me personally.
Your future remarks on this subject may be more lucid if you bring the content of your above comment to the fore at the outset.
Okay, I don’t get it. I can only parse what you’re saying one of two ways:
“We don’t have any idea how folk ethics works.” But that’s not true; we know it’s not “whatever Emperor Ming says”. We can and do observe folk ethics at work, and notice it favors ingroups, is loss averse, is scope insensitive, etc.
“Any attempt to do better won’t be perfectly free of bias. Therefore, you can’t do better. Therefore, the best you can do is to use folk ethics… which has a bunch of known biases.”
You very likely don’t mean either of these, so I don’t know what you’re trying to say.
These statements are a somewhat crude and exaggerated version of what I had in mind, but they’re actually not that far off the mark.
The basic human folk ethics, shaped within certain bounds by culture, is amazingly successful in ensuring human coordination and cooperation in practice, at both small and large scales. (The fact that we see its occasional bad failures as dramatic and tragic only shows that we’re used to it working great most of the time.) The key issue here is that these coordination problems are extremely hard and largely beyond our understanding. While we can predict with some accuracy how individual humans behave, the problems of coordinating groups of people involve countless complicated issues of game theory, signaling, etc., about which we’re still largely ignorant. In this sense, we really don’t understand how folk ethics works.
Now, the important thing to note is that various aspects of folk ethics may seem irrational and biased (in the sense that changing them would have positive consequences by some reasonable measure), while in fact the truth is much more complicated. These “biases” may in fact be essential to the way human coordination works in practice, for some reason that’s still mysterious to us. Even if they don’t serve any direct useful purpose, it may well be that given the constraints of human minds, eliminating them is impossible without breaking something else badly. (A prime example is that once someone goes down the road of breaking intuitively appealing folk ethics principles in the name of consequentialist calculations, it’s practically certain that these calculations will end up being fatally biased.)
Here I have of course handwaved the question of how exactly successful human cooperation depends on the culture-specific content of people’s folk ethics. That question is fascinating, complicated, and impossible to tackle without opening all sorts of ideologically charged issues. But in any case, it presents even further complications and difficulties for any attempt at analyzing and fixing human intuitions by consequentialist reasoning.
(Also, similar reasoning applies not just to folk ethics vs. consequentialism, but also to all sorts of beliefs that may seem outright irrational from a naive “rationalist” perspective, but whose role in practice is much more complicated and important.)
Yeah, that seems to be the crux of our disagreement. You still trust people; you haven’t seen them march into death and drag their children along with them, rejecting a thousand warnings along the way with contempt for such absurd and evil suggestions.
I agree that going against social norms is very costly, that we need cooperation more than ever now there’s seven billion of us, and that if something is bad you still need to coordinate against it. But consider this anecdote:
Many years ago, when I was but a child, I wished to search for the best and rightest politician, and to put them in power. And eagerly did I listen to all, and carefully did I consider their arguments, and honestly did I weight them against history and the evening news. And lo, an ideology was born, and I gave it my allegiance. But still doubts nagged and arguments wavered, and I wished for closure.
One day my politician of choice called for a rally, and to the rally I went; filled with doubt, but willing to serve. And such joy came upon me that I knew I was right; this wave of bliss was the true sign that my cause was just. (For I was but a child, and did not know of laws of entanglement; I knew not that joy spoke of human psychology and told not of world states.)
Then it came to pass that I read a history textbook, and in the book was an excerpt from Robert Brasillach, who too described this joy, and who too claimed it as proof of his ideology. Which was fascism. Oops.
So, yeah, never falling for that one again.
Could you say more about what makes folk ethics a form of virtue ethics (or at least sufficiently virtue-based for you to use the term “folk virtue ethics”)? I can see some aspects of it that are virtue-based, but overall it seems like a hodgepodge of different intuitions/emotions/etc.
Yes, it’s certainly not a clear-cut classification. However, I’d say that the principal mechanisms of folk ethics are very much virtue-based, i.e. they revolve around asking what sort of person acts in a particular way, and what can be inferred about others’ actions and one’s own choice of actions from that.
Your praise for folk ethics would be more persuasive to me, Vladimir, if it came with a description of folk ethics—and if that description explained how folk ethics avoids giving ambiguous answers in many important situations—because it seems to me that a large part of this folk ethics of which you speak consists of people attempting to gain advantages over rivals and potential rivals by making folk-ethical claims that advance their personal interests.
In other words, although I am sympathetic to arguments for conservatism in matters of interpersonal relationships and social institutions, your argument would be a whole lot stronger if the process of identifying or determining the thing being argued for did not rely entirely on the phrase “folk virtue ethics”.
I don’t think we need to get into any controversial questions about interpersonal relationships and social institutions here. (Although the arguments I’ve made apply to these too.) I’d rather focus on the entirely ordinary, mundane, and uncontroversial instances of human cooperation and coordination. With this in mind, I think you’re making a mistake when you write:
In fact, the overwhelming part of folk ethics consists of decisions that are so ordinary and uncontroversial that we don’t even stop to think about them, and of interactions (and the resulting social norms and institutions) that are taken completely for granted by everyone—even though the complexity of the underlying coordination problems is enormous, and the way things really work is still largely mysterious to us. The thesis I’m advancing is that a lot of what may seem like bias and imperfection in folk ethics may in fact somehow be essential for the way these problems get solved, and seemingly airtight consequentialist arguments against clear folk-ethical intuitions may in fact be fatally flawed in this regard. (And I think they nearly always are.)
Now, if we move to the question of what happens in those exceptional situations where there is controversy and conflict, things do get more complicated. Here it’s important to note that the boundary between regular smooth human interactions and conflicts is fuzzy, insofar as the regular interactions often involve conflict resolution in regular and automatic ways, with no sharp limit between such events and more overt and dramatic conflict. Also, there is no sharp boundary between entirely instinctive folk-ethical intuitions and those that are codified in more explicit social (and ultimately legal) norms.
And here we get to the controversies that you mention: the conflict between social and legal norms that embody and formalize folk intuitions of justice, fairness, proper behavior, etc. and evolve spontaneously through tradition, precedent, customary practice, etc., and the attempts to replace such norms with new ones backed by consequentialist arguments. Here, indeed, one can argue in favor of what you call “conservatism in matters of interpersonal relationships and social institutions” using arguments very similar to mine above. But whether or not you agree with such arguments, my main point can be made without even getting into any controversial issues.