My review of The Moral Landscape.

I agree with what you wrote; but I don’t think you singled out what’s going wrong in The Moral Landscape.
Sam Harris has an argument against absolute moral relativism. There really are absolute moral relativists out there, who say that any moral code is as good as any other, and no one should think poorly of Jeffrey Dahmer because he likes to murder men and screw their corpses. I think there are even people reading LW who think they think that. And Sam says, That kind of talk should not be admitted into the discussion. If you can’t pass the bar of saying “hurting people is bad”, then you shouldn’t be allowed to help work out the social contract. We should all be able to agree that hurting people is bad.
Furthermore, the people who are hurting other people badly and systematically really do believe that hurting people is bad; they just have demonstrably false beliefs, religious or political, that cause them to think that their actions are helping people in the long run. This is largely true, though I don’t think Sam understands the mentality of believers as well as he thinks he does.
So, much pain and suffering could be prevented if we said, Hey, you say we need to do X because Y, so let’s figure out whether Y is true… with SCIENCE!
The problem is, this is not enough material to write a book. So Harris makes his claim much more sweeping than his argument supports. He lumps people who say that different societies can have different values in with absolute relativists, to make it seem as if he is a lone voice crying in the wilderness. He argues, erroneously, that his one simple principle is enough to build a moral code upon. When I got to the part where Harris takes the correct objection that minimizing total harm may be unjust—and instead of agreeing, argues that minimizing total harm will happily work out to be perfectly just because deep inside everyone is unselfish and wants other people to be happy—I gave up and stopped reading.
Sam Harris has an argument against absolute moral relativism. There really are absolute moral relativists out there, who say that any moral code is as good as any other, and no one should think poorly of Jeffrey Dahmer because he likes to murder men and screw their corpses.
The way you put it obscures one extremely important difference, namely the difference between individuals who behave in ways that could never be a general norm in a stable and functional society, and societies that are functional and stable even though their norms are extremely different from ours. As far as I can see, the supposed relativists who wish to excuse Jeffrey Dahmer are just a conveniently ridiculous strawman standing in for a cultural relativism that applies only to other functional and stable societies distant in space or time, which is much more difficult (if it is possible at all) to refute.
Now you say:
If you can’t pass the bar of saying “hurting people is bad”, then you shouldn’t be allowed to help work out the social contract. We should all be able to agree that hurting people is bad.
But in fact, there’s going to be plenty of hurting in any realistic human society. Attempts to argue in favor of an ideology because it has a vision for minimizing (or even eliminating) hurting get into all the usual problems with utilitarianism and social engineering schemes, both theoretical and practical.
But in fact, there’s going to be plenty of hurting in any realistic human society. Attempts to argue in favor of an ideology because it has a vision for minimizing (or even eliminating) hurting get into all the usual problems with utilitarianism and social engineering schemes, both theoretical and practical.
This is an invalid objection. Hurting people is bad; therefore, we want to minimize hurting people. Saying “but you can’t bring hurt down to zero” is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.
Also, referring to the “usual problems with utilitarianism and social engineering” literally says that there are problems with utilitarianism and social engineering (which is true), but it falsely implies (a) that utilitarianism has more problems than, or even as many problems as, any other approach, and (b) that attempting to optimize for something is more like “social engineering” than the alternatives are.
Saying “but you can’t bring hurt down to zero” is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.
You speak of “social welfare” as if it were an objectively measurable property of the real world. In reality, there is no such thing as an objective social welfare function, and ideologically convenient definitions of it are a dime a dozen. (And even if such a definition could be agreed upon, there is still almost unlimited leeway to argue over how it could best be maximized, since we lack central planners with godlike powers.)

If we’re going to discuss reworking the social contract, I prefer straight talk about who gets to have power and status, rather than attempts to obscure this question by talking in terms of some supposedly objective, but in fact entirely ghostlike, aggregate utilities at the level of the whole society.
Also, referring to the “usual problems with utilitarianism and social engineering” literally says that there are problems with utilitarianism and social engineering (which is true), but it falsely implies (a) that utilitarianism has more problems than, or even as many problems as, any other approach, and (b) that attempting to optimize for something is more like “social engineering” than the alternatives are.
I’d probably word it a bit differently myself, but I think (a) and (b) are in fact true.
Saying “but you can’t bring hurt down to zero” is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.
You speak of “social welfare” as if it were an objectively measurable property of the real world. In reality, there is no such thing as an objective social welfare function, and ideologically convenient definitions of it are a dime a dozen.
Note the position of “social welfare” in that sentence. It’s in a subordinate clause, describing a common behavior that I cited as justification for taking exception to something you said. So it’s two steps removed from what we’re arguing about. The important part of my sentence is the first part: “Saying ‘you can’t bring hurt down to zero’ is an invalid objection.” “Hurting people is bad” is not very controversial. You’re taking a minor, tangential subordinate clause, which is unimportant and not worth defending in this context, and replying as if that were an objection to my point.
I don’t mean that you’re trying to do this, but this is a classic Dark Arts technique: if your goal is to make “hurting people is bad” look controversial, you instead pick out something else in the same sentence that really is controversial, and point that out.

I also didn’t mean to say that you are pernicious or have ill intent, just that the objection I was replying to upsets me because it is commonly used in a Dark Arts way.
I’d probably word it a bit differently myself, but I think (a) and (b) are in fact true.
Fair enough—it implies (a) and (b), whether true or false.
I say it isn’t theoretically possible for utilitarianism to have more problems than any other approach, because any other approach can be recast in a utilitarian framework, and then improved by making it handle more cases. A “non-utilitarian” approach just means an incomplete approach that leaves a mostly random set of possible cases unhandled, because it doesn’t produce a complete preference ordering over possible worlds. It’s like having a ruler that’s missing most of its markings.
I say it isn’t theoretically possible for utilitarianism to have more problems than any other approach, because any other approach can be recast in a utilitarian framework, and then improved by making it handle more cases.
“Improved” is a tricky word here. If you’re discussing the position of an almighty god contemplating the universe, then yes, I agree. But when it comes to practical questions of human social order and the coordination and arbitration of human interactions, the idea that such questions can be answered in practice by contemplating and maximizing some sort of universal welfare function, i.e., some global aggregate utility, is awful hubris that is guaranteed to backfire in complete disaster—Hayek’s “fatal conceit,” if you will.
To a decent first approximation, you’re not allowed to use the words “hubris” and “guaranteed” in the same sentence.

A fair point, but given the facts of the matter, I’d say that the qualification “guaranteed” needs to be toned down only slightly to make the utterance reasonably modest. (And since I’m writing on LW, I should perhaps be explicit that I’m not considering the hypothetical future appearance of some superhuman intelligence, but regular human social life and organization.)
I think what’s going on is that you’re getting annoyed by naive applications of utilitarian reasoning, such as Yvain’s in the offense thread, and then improperly generalizing that annoyance even to sophisticated applications.
On the contrary, it is the “sophisticated” applications that annoy me the most.
I don’t think it’s reasonable to get annoyed by people’s opinions expressed in purely intellectual debates such as those we have here, as long as they are argued politely, honestly, and intelligently. However, out there in the real world, among the people who wield power, influence, and status, there are a great many hubristic and pernicious utilitarian ideas, which are dangerous exactly because they have the public image of high status and sophistication. They go under all sorts of different monikers, and can be found in all major ideological camps (their distribution is of course not random, but let’s not go there). What they all have in common is the seemingly smart, sophisticated, and scientific, but in fact spectacularly delusional, attitude that things can be planned and regulated on a society-wide (or even world-wide) scale by supposedly scientific methods for maximizing various measures of aggregate welfare.

The most insane and dangerous of such ideas, namely old-school economic central planning, is fortunately no longer widely popular (though a sizable part of the world had to be wrecked before its craziness finally became undeniable). The ones that are flourishing today are less destructive, at least in the short to medium run, but they are at the same time more difficult to counter, since the evidence of their failure is less obvious and easier to rationalize away. Unfortunately, here I would have to get into sensitive ideological issues to provide more concrete analysis and examples.
But in fact, there’s going to be plenty of hurting in any realistic human society. Attempts to argue in favor of an ideology because it has a vision for minimizing (or even eliminating) hurting get into all the usual problems with utilitarianism and social engineering schemes, both theoretical and practical.
Isn’t that pretty much the entire question of political philosophy? There’s a reason politics is bad for rationality: it’s basically about hurting people.