Karma for the post is relatively low, and a lot of comments, including the top-rated, can be summarized as “Fun idea, but too crazy to even consider.”
To be clear, the ideas in question are to establish charities to:
breed rats and then pamper those rats so as to increase the amount of happiness in the world
research insecticides that kill insects in nicer ways
I think that there are legitimate, rational reasons to reject these ideas. I think that you are being uncharitable by assuming that those who responded negatively to those ideas are closed-minded; not every idea is worth spending much time considering.
Those ideas are perfectly rational, given EA premises about maximizing all utility (and the belief that animals have utility). It’s just that they’re weird conclusions because they are based on weird premises.
Most people, when they encounter such weird conclusions, would begin to question the premises rather than let themselves be led to their doom. It’s possible to bite the bullet too much.
The problem is that “utility” is supposed to stand for what I care about. I don’t care about happy rats or happy insects. That is why I am against that kind of project. That is also why eating meat does not bother me, even though I am pretty sure that pigs and cows can and do suffer. I might prefer that they not suffer, other things being equal, but my concern about that is tiny compared to how much I care about humans.
If utility stands for what you care about, everyone is a utilitarian by definition. Even if you only care about yourself, that just means that your utility function gives great weight to your preferences and no weight to anyone else’s.
“Utilitarian” doesn’t mean “acting according to a utility function”. Further, many people’s actions are really difficult to express in terms of a utility function, and even to try you need to do things like make it change a lot over time and depend heavily on the actions and/or character of the person whose utility function it’s supposed to be.
I’m not (I think) saying that to disagree with you; if I’m understanding correctly your first sentence is intended as a sort of reductio ad absurdum of entirelyuseless’s comment. But, if so, I am saying the following to disagree with you: I think it is perfectly possible to be basically utilitarian and think animal suffering matters, without finding it likely that happy rat farms and humane insecticides are an effective way to maximize utility. And so far as I know, values of the sort you need to hold that position are quite common among basically-utilitarian people and quite common among people who identify as EAs.
Most people, when they encounter such weird conclusions, would begin to question the premises rather than let themselves be led to their doom. It’s possible to bite the bullet too much.
Great point. As the old saying goes:
One person’s modus ponens is another person’s modus tollens.
ETA:
However, none of this is an indictment of EA—one can believe in the principles of EA without also being a strict hedonistic utilitarian. The weird conclusions follow from utilitarianism rather than from EA.
You might not care about happy rats, but a sizable number of EAs do care about animal suffering.