I’m puzzled by most of your links.
“Drifting from rationality”: What’s your problem with the post you link to? It seems to me it’s simply pointing out that not everyone is a utilitarian, and that whether someone is a utilitarian is a matter of values as well as rationality. What’s wrong with that?
“Closed-minded”: the reaction to that post looks pretty positive to me. (And the post is pretty strange. It proposes creating rat farms filled with very happy rats as a utility-generating tool, and researching insecticides that kill insects in nicer ways.)
“Overly-optimistic”: that post predicts a 5% chance that within 20 years the whole EA movement might be as big as the Gates Foundation. Do you really find that unreasonable?
I do agree about the fourth link—but I don’t think it’s representative, and if you look at reactions on LW to the same author’s posts here, you’ll see that you’re far from the only person who dislikes his style.
Drifting from rationality
From the post, “This means that we need to start by spreading our values, before talking about implementation.” Splitting the difficult “effective” part from the easy “altruism” part this early in the movement is troubling. The path of least resistance is tempting.
Closed-minded
Karma for the post is relatively low, and a lot of comments, including the top-rated, can be summarized as “Fun idea, but too crazy to even consider.”
Overly-optimistic
The post glosses over the time value of money/charitable donations and the GWWC member quit rate, so I think it’s reasonable to say that the Gates Foundation will almost definitely have moved more time value-adjusted money than GWWC’s members will have over the next twenty years. Therefore, speculating that GWWC could be a “big deal” comparable to the Gates Foundation in this time frame is overly optimistic. Still disagree? Let’s settle on a discount rate and structure a hypothetical bet; I’ll give you better than 20-1 odds.
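To make the discounting point concrete, here is a minimal sketch of the comparison I have in mind. The discount rate, quit rate, and dollar figures are placeholders I made up purely for illustration; none of them come from the post or from GWWC.

    # Hypothetical present-value comparison of two twenty-year donation
    # streams. Every figure below is an illustrative assumption, not data.
    DISCOUNT_RATE = 0.05   # assumed real discount rate
    QUIT_RATE = 0.05       # assumed annual GWWC member attrition
    YEARS = 20

    def present_value(annual_amount, years, discount_rate, attrition=0.0):
        """Discounted sum of an annual donation stream with optional attrition."""
        total = 0.0
        for t in range(years):
            surviving = (1 - attrition) ** t            # fraction of donors still giving in year t
            total += annual_amount * surviving / (1 + discount_rate) ** t
        return total

    gwwc_annual = 50e6     # assumed dollars moved per year by GWWC members
    gates_annual = 4e9     # assumed Gates Foundation grantmaking per year

    pv_gwwc = present_value(gwwc_annual, YEARS, DISCOUNT_RATE, attrition=QUIT_RATE)
    pv_gates = present_value(gates_annual, YEARS, DISCOUNT_RATE)
    print(f"PV of GWWC stream:  ${pv_gwwc / 1e9:.1f}B")
    print(f"PV of Gates stream: ${pv_gates / 1e9:.1f}B")
    print(f"Ratio (Gates / GWWC): {pv_gates / pv_gwwc:.0f}x")

Under any assumptions in this ballpark the Gates stream dominates, which is all the 20-1 offer is meant to capture.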
Self-congratulatory
I don’t actually believe this is a big problem in itself, but if the other problems exist it seems like this would exacerbate them.
Karma for the post is relatively low, and a lot of comments, including the top-rated, can be summarized as “Fun idea, but too crazy to even consider.”
To be clear, the ideas in question are to establish charities to:
breed rats and then pamper those rats so as to increase the amount of happiness in the world
research insecticides that kill insects in nicer ways
I think that there are legitimate, rational reasons to reject these ideas. I think that you are being uncharitable by assuming that those who responded negatively to those ideas are closed-minded; not every idea is worth spending much time considering.
Those ideas are perfectly rational, given EA premises about maximizing all utility (and the belief that animals have utility). It’s just that they’re weird conclusions because they are based on weird premises.
Most people would, when they encounter such weird conclusions, begin to question the premises rather than let themselves be led to their doom. It’s possible to bite the bullet too much.
The problem is that “utility” is supposed to stand for what I care about. I don’t care about happy rats or happy insects. That is why I am against that kind of project. That is also why eating meat does not bother me, even though I am pretty sure that pigs and cows can and do suffer. I might prefer that they not suffer, other things being equal, but my concern about that is tiny compared to how much I care about humans.
You might not care about happy rats, but a sizable number of EAs care about animal suffering.
If utility stands for what you care about, everyone is a utilitarian by definition. Even if you only care about yourself, that just means that your utility function gives great weight to your preferences and no weight to anyone else’s.
“Utilitarian” doesn’t mean “acting according to a utility function”. Further, many people’s actions are really difficult to express in terms of a utility function, and in order even to try you need to do things like making it change a lot over time and depend heavily on the actions and/or character of the person whose utility function it’s supposed to be.
I’m not (I think) saying that to disagree with you; if I’m understanding correctly, your first sentence is intended as a sort of reductio ad absurdum of entirelyuseless’s comment. But, if so, I am saying the following to disagree with you: I think it is perfectly possible to be basically utilitarian and think animal suffering matters, without finding it likely that happy rat farms and humane insecticides are an effective way to maximize utility. And so far as I know, values of the sort you need to hold that position are quite common among basically-utilitarian people and among people who identify as EAs.
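To spell out the distinction, here is a rough formalization; the notation is mine and deliberately simplified. Acting on a utility function just means maximizing some personal objective

    U_A(x) = \sum_i w_i \, u_i(x), \qquad \text{e.g. } w_A = 1,\ w_{i \neq A} = 0,

which is compatible with caring about nobody else at all, whereas utilitarianism in its simplest form means maximizing the unweighted aggregate

    W(x) = \sum_i u_i(x).

Having the first does not make you a utilitarian, and whose u_i appear in the second (humans only, or rats and insects too) is exactly the values question at issue upthread.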
Most people would, when they encounter such weird conclusions, begin to question the premises rather than let themselves be led to their doom. It’s possible to bite the bullet too much.
Great point. It is like the old saying goes:
that which is one person’s modus ponens is another person’s modus tollens
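Spelled out, with P standing for the utilitarian premises and Q for the weird conclusion they imply:

    P \rightarrow Q
    \text{modus ponens:}\quad P \rightarrow Q,\; P \;\vdash\; Q \qquad \text{(accept the premises, bite the bullet)}
    \text{modus tollens:}\quad P \rightarrow Q,\; \neg Q \;\vdash\; \neg P \qquad \text{(reject the conclusion, give up a premise)}

Both readers use the same conditional; they differ only in which end of it they trust more.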
ETA:
However, none of this is an indictment of EA—one can believe in the principles of EA without also being a strict hedonistic utilitarian. The weird conclusions follow from utilitarianism rather than from EA.
Karma for the post is relatively low, and a lot of comments, including the top-rated, can be summarized as “Fun idea, but too crazy to even consider.”
If a net positive reception is the best example you can bring of EA being closed-minded, it seems to me that anybody who hasn’t looked into the issue of whether EA is open-minded should update in the direction of EA being more open-minded than their priors suggest.
The post argues that the most effective way to achieve EA goals is to prioritize spreading EA-ish values over making arguments that will appeal only to people whose values are already EA-ish. I don’t know whether that’s correct, but I fail to see how figuring out what’s most effective and doing it could be an abandonment of rationality in any sense that’s relevant here. Taking the path of least resistance—i.e., seeking maximum good done per unit cost—is pretty much the core of what EA is about, no?
Karma for the post is relatively low
OK. Inevitably some posts will have relatively low karma. On what grounds do you think this shouldn’t have been one of them?
moved more time value-adjusted money [...] over the next twenty years
I don’t think that’s at all what the post was assigning a 5% probability to.