Saying “but you can’t bring hurt down to zero” is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.
You speak of “social welfare” as if it were an objectively measurable property of the real world. In reality, there is no such thing as an objective social welfare function, and ideologically convenient definitions of it are a dime a dozen. (And even if such a definition could be agreed upon, there is still almost unlimited leeway to argue over how it could best be maximized, since we lack central planners with godlike powers.)
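To make the point concrete, here is a toy sketch in Python (the income distributions and the three candidate definitions are invented for illustration, not anyone’s actual proposal). Three textbook aggregation rules, each a perfectly defensible “social welfare function,” look at the same two hypothetical distributions and disagree about which one is better:

```python
# Toy illustration: three candidate "social welfare functions", each a
# defensible definition of aggregate welfare, disagree about which of two
# hypothetical income distributions is better. All numbers are made up.
from math import prod

dist_a = [10, 10, 10, 10]   # equal shares, lower total (40)
dist_b = [2, 14, 14, 14]    # unequal shares, higher total (44)

welfare_functions = {
    "utilitarian (sum)":  sum,   # Bentham-style total welfare
    "Rawlsian (maximin)": min,   # welfare of the worst-off person
    "Nash product":       prod,  # rewards both size and equality
}

for name, w in welfare_functions.items():
    preferred = "A" if w(dist_a) > w(dist_b) else "B"
    print(f"{name:20}: A={w(dist_a):6}  B={w(dist_b):6}  -> prefers {preferred}")
```

The utilitarian sum prefers B, while the maximin and Nash rules prefer A. Pick whichever function favors your side, and you have an “objective” argument; that is exactly the leeway I am talking about.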
If we’re going to discuss a reworking of the social contract, I prefer straight talk about who gets to have power and status, rather than attempts to obscure this question by talking in terms of some supposedly objective, but in fact entirely ghostlike, aggregate utilities at the level of the whole society.
Also, referring to the “usual problems with utilitarianism and social engineering” literally says only that there are problems with utilitarianism and social engineering (which is true), but it falsely implies (a) that utilitarianism has more problems than, or at least as many as, any other approach, and (b) that attempting to optimize for something is more like “social engineering” than the alternatives are.
I’d probably word it a bit differently myself, but I think (a) and (b) are in fact true.
> Saying “but you can’t bring hurt down to zero” is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.
> You speak of “social welfare” as if it were an objectively measurable property of the real world. In reality, there is no such thing as an objective social welfare function, and ideologically convenient definitions of it are a dime a dozen.
Note the position of “social welfare” in that sentence. It’s in a subordinate clause, describing a common behavior that I use as justification for taking special exception to something you said. So it’s two steps removed from what we’re arguing about. The important part of my sentence is the first part: “Saying ‘you can’t bring hurt down to zero’ is an invalid objection.” “Hurting people is bad” is not very controversial. You’re taking a minor, tangential subordinate clause, which is unimportant and not worth defending in this context, and replying to it as if it were my main point.
I don’t mean that you’re trying to do this, but it is a classic Dark Arts technique: if you want to make an uncontroversial claim like “hurting people is bad” look controversial, you instead pick out something else in the same sentence that really is controversial, and point that out.
I also didn’t mean to say that you are pernicious or have ill intent, just that the objection I was replying to is one that upsets me because it is commonly used in a Dark Arts way.
> I’d probably word it a bit differently myself, but I think (a) and (b) are in fact true.
Fair enough—it implies (a) and (b), whether true or false.
I say it isn’t theoretically possible for utilitarianism to have more problems than any other approach, because any other approach can be recast in a utilitarian framework, and then improved by making it handle more cases. A “non-utilitarian” approach just means an incomplete approach that leaves a mostly random set of possible cases unhandled, because it doesn’t produce a complete ordering of values over possible worlds. It’s like a ruler that’s missing most of its markings.
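To make the “incomplete ordering” point concrete, here is a toy sketch in Python (the possible worlds, the single rule, and the utility numbers are all invented for illustration). A utility function assigns a number to every possible world, so any pair of worlds can be compared; a lone deontological rule compares only the pairs it happens to cover:

```python
# Toy sketch: a utility function totally orders possible worlds, while a
# single rule ("lying makes a world worse") leaves many pairs uncompared.
# Worlds and numbers are invented for illustration.
from itertools import combinations

worlds = ["truth_rich", "truth_poor", "lie_rich", "lie_poor"]

# Utilitarian-style evaluation: a (made-up) utility for every world.
utility = {"truth_rich": 10, "truth_poor": 4, "lie_rich": 7, "lie_poor": 1}

def utility_verdict(w1, w2):
    """Defined for every pair of worlds: higher utility wins."""
    return w1 if utility[w1] > utility[w2] else w2

def rule_verdict(w1, w2):
    """Defined only when exactly one of the two worlds involves a lie."""
    lie1, lie2 = w1.startswith("lie"), w2.startswith("lie")
    if lie1 == lie2:
        return None  # the rule is silent: this case is left unhandled
    return w2 if lie1 else w1

for w1, w2 in combinations(worlds, 2):
    print(f"{w1} vs {w2}: utility prefers {utility_verdict(w1, w2)}, "
          f"the rule says {rule_verdict(w1, w2) or 'nothing'}")
```

The rule delivers a verdict on only four of the six pairs; the utility function covers all six. That full coverage is what I mean by the ruler having all of its markings.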
> I say it isn’t theoretically possible for utilitarianism to have more problems than any other approach, because any other approach can be recast in a utilitarian framework, and then improved by making it handle more cases.
“Improved” is a tricky word here. If you’re discussing the position of an almighty god contemplating the universe, then yes, I agree. But when it comes to practical questions of human social order and the coordination and arbitration of human interactions, the idea that such questions can be answered in practice by contemplating and maximizing some sort of universal welfare function, i.e., some global aggregate utility, is awful hubris that is guaranteed to backfire in complete disaster—Hayek’s “fatal conceit,” if you will.
To a decent first approximation, you’re not allowed to use the words “hubris” and “guaranteed” in the same sentence.
A fair point, but given the facts of the matter, I’d say that the qualification “guaranteed” needs to be toned down only slightly to make the utterance reasonably modest. (And since I’m writing on LW, I should perhaps be explicit that I’m not considering the hypothetical future appearance of some superhuman intelligence, but regular human social life and organization.)
I think what’s going on is you’re getting annoyed by naive applications of utilitarian reasoning such as Yvain’s in the offense thread, then improperly generalizing that annoyance to even sophisticated applications.
On the contrary, it is the “sophisticated” applications that annoy me the most.
I don’t think it’s reasonable to get annoyed by people’s opinions expressed in purely intellectual debates such as those we have here, as long as they are argued politely, honestly, and intelligently. However, out there in the real world, among the people who wield power, influence, and status, there are a great many hubristic and pernicious utilitarian ideas, which are dangerous exactly because they have the public image of high status and sophistication. They go under all sorts of different monikers, and can be found in all major ideological camps (their distribution is of course not random, but let’s not go there). What they all have in common is a seemingly smart, sophisticated, and scientific, but in fact spectacularly delusional, attitude: that things can be planned and regulated on a society-wide (or even world-wide) scale by supposedly scientific methods for maximizing various measures of aggregate welfare.
The most insane and dangerous of such ideas, namely old-school economic central planning, is fortunately no longer widely popular (though a sizable part of the world had to be wrecked before its craziness finally became undeniable). The ones that are flourishing today are less destructive, at least in the short to medium run, but they are at the same time more difficult to counter, since the evidence of their failure is less obvious and easier to rationalize away. Unfortunately, I would have to get into sensitive ideological issues to provide more concrete analysis and examples here.