I know many EAs and consider many of them friends, but I do not centrally view the world in EA terms, or share the EA moral or ethical frameworks. I don’t use what seem, for all practical purposes, to be their decision theories. I have very large, very deep, very central disagreements with EA and its core components and central organizations and modes of operation. I have deep worries that important things are deeply, deeply wrong, especially epistemically, and that this results in an increasingly Goodharted and inherently political and insider-biased system. I worry that this does intense psychological, epistemic and life-experiential damage to many EAs.
Some of that I’ll gesture at or somewhat discuss here, and some of it I won’t. I’m not trying to justify all of my concerns here; I’m trying to share thoughts. If and when I have time in the future, I hope to write something shorter that is better justified.
I also want to make something else clear, for all my disagreements with and worries about it: These criticisms of Effective Altruism are comparing it to what it can and should be, and what it needs to be to accomplish its nigh-impossible tasks, rather than comparing it to popular alternatives.
If you read my Moral Mazes sequence, you’ll see how perversely I view most of what many people do most days. I critique here in such detail because, despite all our disagreements and my worries, I love and I care.
I appreciate that you flagged the criticism of EA as being relative to the standard of being able to achieve very difficult tasks. I still think that, when applying very high standards (and concomitantly strong language), it’s worth being more careful about the ways in which this predictably biases people’s judgements and makes discussion worse. E.g. I have a hard time mentally conceptualising something as being “deeply, deeply wrong” and “horrifying”, but also unusually good compared to alternatives; the former crowds out the latter for me, and I suspect many other readers.

More arguments/elaboration in this comment.
I want, as much as possible, to get away from the question of whether ‘EA is good’ or ‘EA is bad’ to various extents. I made an effort to focus on sharing information, rather than telling people what conclusions or affects to take away from it.
What I am saying in the quoted text is that I believe there are specific things within EA that are deeply wrong. This is not at all a conflict with EA being unusually good.
I’m also saying wrong as in mistaken. And (this is me responding to the linked comment’s complaint) I’m definitely not intending to throw around words like ‘evil’, or at least didn’t do so on purpose; I was trying to avoid making moral claims at all, let alone non-consequentialist ones, although I am noting that I have strong moral framework disagreements.
For a concrete, clean non-EA example, one could say: The NFL is exceptional, but there is something deeply, deeply wrong with the way it deals with the problem of concussions. And I could badly want them to fix their concussion protocols or safety equipment, and still think the NFL was pretty great.
And I do agree that there will be people who then say “So why do you hate the NFL?” (or “How can you not hate the NFL?”) but we need to be better than that, ideally everywhere, but at least here.
(Similarly, the political problem when someone says “I love my country, but X” or someone else says “How can you love your country when it does X?”)
I do agree that these issues can be difficult, but if this kind of extraordinary effort (flagging the standard in bold text in a clearly sympathetic way, being careful to avoid moral claims and instead sharing intuitions, models and facts, letting the reader draw their own implications on all levels from the information rather than telling them what to conclude) isn’t good enough, then I’m confused what the alternative is that still communicates the information at all.
It seems worthwhile to break down exactly what the detailed references here are, so I’ll also tackle the other example you referred to. Of course this was a giant post written fast, so this is unpacking largely unconscious/background thinking, but that’s a lot of how one thinks, and unpacking it is still highly useful.
You refer to “horrifying”, so I want to quote that passage:
To the extent one thinks any or all of that is wrongheaded or broken, one would take issue with the process and its decisions, especially the resulting grants which ended up giving the majority of the funds distributed to explicitly EA-branded organizations.
From many of the valid alternative perspectives that do think such things about EA as it exists in practice, being unusually virtuous in executing the framework here doesn’t make the goings on much less horrifying. I get that.
Here I was attempting to speak to and acknowledge those who do actually find this horrifying, and to point out that there are frameworks of thinking within which EA really is doing something that would qualify as horrifying. That isn’t in conflict with the idea that what we found in SFF was in many ways an unusually virtuous execution of the thing, and people holding those frameworks should update positively when they notice this, even though, given the rest of their framework, it isn’t better in consequentialist terms. I know multiple people who are indeed horrified here.
I wanted people to be able to take in the information no matter their prior perspectives, and also for everyone to notice that the assumption that “EA = good” is a hidden assumption and that if it goes away a lot of other things fall away too.
What I didn’t say is that anything actually is horrifying here in any objective sense, or even that I was horrified. On reflection I am horrified by the ‘seek power and money’ dynamics and people’s failure to notice the skulls there, but that’s not a bid to get everyone else to be horrified.
I think your other comment has merit in that deontological/moral language has a great ability to distract, so it should be used carefully; it’s better, where possible, to say explicitly exactly what is happening. But there are also times when it’s the only reasonable way to convey information, and trade-offs are a thing. There’s of course a time and a place to advocate for one’s moral framework, or where it’s important to share one’s moral framework as context. E.g. when Ben says “I realized Facebook was evil” he is sharing information about his frameworks, thinking and mental state that would be very difficult to convey without the word evil. Ben could instead try to say “I realized Facebook was having a net harmful impact on its users much larger than the surplus it was able to extract, and that it would be decision theoretically correct to avoid giving it power or interacting with it” or something, but that is both way longer and uglier and also really, really, really doesn’t convey the same core information Ben wants to convey.
There are also times when the literally accurate word simply has negative connotations because negative things have negative connotations. Thus, if someone or some system systematically “says that which is not” in order to extract resources from others, and would instead have said that which is had “saying that which is not” not allowed the extraction of resources, it seems reasonable to say that this person or system is lying, and that this pattern of lying may be a problem. If you say this is technically correct but you don’t like the impression because it has a bad connotation, I mean… what are you suggesting, exactly?
Similarly, you object to the use of the word ‘attack’ and I assumed you were referring to the SSC/NYT thing, and I was prepared to defend that usage, but then I looked and the link is to my post on Slack? And I notice I am confused?
The word ‘attack’ there, in a post that’s clearly using artistic flourish and also talking about abstractions, is used in two places.
“You Can Afford It”
People like to tell you, “You can afford it.”
No, you can’t. This is the most famous attack on Slack.
Yes, this is literally an attack. It is an attempt to extract resources from a target through the use of rhetoric, by convincing them not to value something that is valuable. And I don’t even see what the bad connotations are here. Are you simply saying that the use of the word ‘attack’ is inherently bad?
Here’s the other:
Out to Get You and the Attack on Slack
Many things in this world are Out to Get You. Often they are Out to Get You for a lot, usually but not always your time, attention and money.
Again, I am confused what your objection is here, unless it’s something like ‘rhetorical flourish is never allowed’ (or something more asymmetric and less charitable than that, or some sort of superweapon against any effective rhetoric).
Similarly, you object to “war” in “Blackmailers are privateers in the war against hypocrisy.” This is a post of Benquo’s I happen to actively and strongly disagree with in its central point, but seriously, what exactly is the issue with the word ‘war’ here? That metaphor is considered harmful? I don’t see this as in any way distracting or distorting, as far as I can tell it’s the best way to convey the viewpoint the post is advocating for, and I’m curious how you would propose to do better. Now I happen to think the viewpoint here is very wrong, but that doesn’t mean it shouldn’t get to use such techniques to convey its ideas.
To give you an idea of where I am sympathetic: I do think the use of the word ‘scam’ brings more heat than light in many cases. Even when it is technically correct, there are often other ways to convey the information that work better, so I make sure to pull out the word ‘scam’ (or ‘fraud’) only when I really mean it.