I want, as much as possible, to get away from the question of whether ‘EA is good’ or ‘EA is bad’ to various extents. I made an effort to focus on sharing information, rather than telling people what conclusions or affects to take away from it.
What I am saying in the quoted text is that I believe there are specific things within EA that are deeply wrong. This does not at all conflict with EA being unusually good.
I’m also saying wrong as in mistaken. I’m definitely not intending (this is me responding to the linked comment’s complaint) to throw around words like ‘evil,’ or at least did not do so on purpose. I was trying to avoid making moral claims at all, let alone non-consequentialist ones, although I am noting that I have strong moral framework disagreements.
For a concrete, clean non-EA example, one could say: The NFL is exceptional, but there is something deeply, deeply wrong with the way it deals with the problem of concussions. And I could badly want them to fix their concussion protocols or safety equipment, and still think the NFL was pretty great.
And I do agree that there will be people who then say “So why do you hate the NFL?” (or “How can you not hate the NFL?”), but we need to be better than that, ideally everywhere but at least here.
(Similarly, there is the political problem when someone says “I love my country, but X” and someone else says “How can you love your country when it does X?”)
I do agree that these issues can be difficult, but if this kind of extraordinary effort (flagging the standard in bold text in a clearly sympathetic way, being careful to avoid moral claims and instead sharing intuitions, models, and facts, letting the reader draw their own implications on all levels from the information rather than telling them what to conclude) isn’t good enough, then I’m confused about what the alternative is that still communicates the information at all.