I agree with much of your worldview as I’ve interpreted it. In particular I agree that:
• Behavioral norms evolved by natural selection to solve coordination problems and to allow humans to work together productively, given the particulars of our biological hard-wiring.
• Many apparently logically sound departures from behavioral norms will fail to serve their intended functions, for complicated reasons of which people lack explicit understanding.
• Human civilization is a complicated dynamical system which is (in some sense) at equilibrium, and attempts to shift it from this equilibrium will often either fail (because of equilibrating forces) or lead to disaster (by destabilizing the equilibrium and causing everything to fall apart).
• The standard of rigor and accuracy in the social sciences is often very poor, owing both to the biases of the researchers involved and to the inherent complexity of the relevant problems (as you described in your top-level post).
On the other hand, here and elsewhere in the thread you present criticism without offering alternatives. Criticism is not without value, but its value is contingent on the existence of superior alternatives.
But you at least need some kind of feedback. “QALYs per hour of effort” is pretty decent.
Trouble is, once you go down that road, it’s likely that you’re going to come up with fatally misguided or biased conclusions.
What do you suggest as an alternative to MixedNuts’ suggestion?
As rhollerith_dot_com said, folk ethics gives ambiguous prescriptions in many cases of practical import. One can avoid some such issues by focusing one’s efforts elsewhere, but not in all cases. People representative of the general population have strong differences of opinion as to what sorts of jobs are virtuous and what sorts of philanthropic activities are worthwhile. So folk ethics alone doesn’t suffice to give a practically applicable ethical theory.
Also, very few things people do, if any, result in a clear Pareto improvement for everyone, and interpersonal trade-offs are inherently problematic.
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The discussion has drifted away somewhat from the original disagreement, which was about situations where a seemingly clear-cut consequentialist argument clashes with a nearly universal folk-ethical intuition (as exemplified by various trolley-type problems). I agree that folk ethics (and its natural customary and institutional outgrowths) are ambiguous and conflicted in some situations to the point of being useless as a guide, and the number of such situations may well increase with the technological developments in the future. I don’t pretend to have any great insight about these problems. In this discussion, I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased.
Regarding this, though:
But interpersonal trade-offs are also inevitable; it’s not as though one avoids the issue by avoiding consequentialism.
The important point is that most conflicts get resolved in spontaneous, or at least tolerably costly ways because the conflicting parties tacitly share a focal point when an interpersonal trade-off is inevitable. The key insight here is that important focal points that enable things to run smoothly often lack any rational justification by themselves. What makes them valuable is simply that they are recognized as such by all the parties involved, whatever they are—and therefore they often may seem completely irrational or unfair by other standards.
Now, consequentialists may come up with a way of improving this situation by whatever measure of welfare they use. However, what they cannot do reliably is to make people accept the implied new interpersonal trade-offs as new focal points, and if they don’t, the plan will backfire—maybe with a spontaneous reversion to the status quo ante, and maybe with a disastrous conflict brought by the wrecking of the old network of tacit agreements. Of course, it may also happen that the new interpersonal trade-offs are accepted (whether enthusiastically or by forceful imposition) and the reform is successful. What is essential to recognize, however, is that interpersonal trade-offs are not only theoretically indeterminate, but also that any way of resolving them must deal with these complicated issues of whether it will be workable in practice. For this reason, many consequentialist designs that look great on paper are best avoided in practice.
I am merely arguing that when there is a conflict between a consequentialist (or other formal) argument and a folk-ethical intuition, it is strong evidence that there is something seriously wrong with the former, even if it’s entirely non-obvious what it might be, and it’s fallacious to automatically discard the latter as biased
I agree. And I like the rest of your response about tacitly shared focal points.
Part of what you may be running up against on LW is people here
(a) having a low intuitive sense for what these focal points are, and
(b) the existing norms being designed to be tolerable for ‘most people,’ with LWers falling outside of ‘most people’ and correspondingly finding existing norms intolerable with higher than usual frequency.
I know that each of (a) and (b) sometimes applies to me personally.
Your future remarks on this subject may be more lucid if you bring the content of your above comment to the fore at the outset.