Any kind of utilitarianism entails every statement of the form “p, if it results in measurably maximized utility” (kinds of utilitarianism differ in what they mean by “maximized utility”, since the phrase itself is underspecified), and I find it a bit disingenuous to instantiate p in a way that people wouldn’t like in order to defame its proponents instead of saying straightforwardly that you just don’t agree with utilitarianism.
Which is quite a different question from whether a given p does, in fact, result in maximized utility. No idea whether the above one does. So cousin_it’s question makes perfect sense: does p in fact result in maximized utility? Because if it doesn’t, then the blogger’s statement is even more disingenuous.
I don’t think that gets at the core of the criticism.
I think the position is:
“You shouldn’t be allowed to argue that policy X is good in abstract scenario A if policy X is dangerous in the world B in which you are living, and the fact that you argue that X is good in A increases the chances that X will be adopted in B.”
I’d suggest unpacking that “shouldn’t be allowed”.
To me, it reads something like:
“Let’s say that in abstract scenario S, policy X sounds like a utility-maximizing proposal; but in the world we’re living, policy X would hurt our neighbors A, B, and C. If we spend our social time chatting about policy X and how great it would be, and chide people who criticize policy X that they are not being good utility maximizers, we should predict that A, B, and C will see us as a threat to their well-being.”
That last bit is the part I think a lot of this discussion is missing.
I’d suggest unpacking that “shouldn’t be allowed”.
I do think that apophenia calls for community rules that constitute “safety belts” limiting what people can say. I would strongly predict that they would favor a policy for lesswrong under which lesswrong moderators delete posts that make such arguments.
But you are right, the part about neighbors also matters.