I’d like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of “pretending to actually try.” People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.
As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you’re bringing up and have chosen to focus on other things. Big picture, I find claims like “your thing has problem X, so you need to spend more resources on fixing X” more compelling when you point to things we’ve been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I’d be pretty hesitant to switch to writing blogposts criticizing mistakes that people in the EA community commonly make, though I’ve considered doing so and agree this would help address some of the issues you’ve identified. But I would welcome more of that kind of thing.
I disagree with your perspective that the effective altruism movement has underinvested in research into population ethics. I wrote a PhD thesis that heavily featured population ethics and aimed at drawing out big-picture takeaways for issues like existential risk. I wouldn’t say I settled all the issues, but I think we’d make more progress as a movement if we did less philosophy and more basic fact-finding of the kind that goes into GiveWell’s shallow cause overviews.
Disclosure: I am a Trustee for the Centre for Effective Altruism and I formerly worked at GiveWell as a summer research analyst.
The main thing that I personally think we don’t need as much of is donations to object-level charities (e.g. GiveWell’s top picks). It’s unclear to me how much this can be funged into more self-reflection for the typical person, but, for instance, I am sacrificing potential donations right now in order to write this post and respond to criticism...
I think that, in general, a case of “X is bad, so we need to put more into fixing X” without specific recommendations can also be useful, in that it leaves the resource allocation up to individual people. For instance, you decided that your current plans are better than spending more time on social-movement introspection, but (hopefully) not everyone who reads this post will come to the same conclusion.
I think “writing blogposts criticizing mistakes that people in the EA community commonly make” is a moderate strawman of what I’d actually like to see, in that it gets us closer to being a successful movement but clearly won’t be sufficient on its own.
Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can’t come to nontrivial conclusions already, the kind of facts we’re likely to find won’t help very much.
Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides.
> Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides.
Still not so sure. Legibility and inferential distance are major constraints here. When trying to explain earning to give it’s much easier if the “give” part is something obviously good. Donor-advised funds combined with an intention to choose effective charities aren’t “obviously good” in the same way as a donation to a charity.
> The main thing that I personally think we don’t need as much of is donations to object-level charities (e.g. GiveWell’s top picks). It’s unclear to me how much this can be funged into more self-reflection for the typical person, but, for instance, I am sacrificing potential donations right now in order to write this post and respond to criticism...
I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am for opportunities for us to learn and expand our influence. So I’m pretty on board here.
> I think “writing blogposts criticizing mistakes that people in the EA community commonly make” is a moderate strawman of what I’d actually like to see, in that it gets us closer to being a successful movement but clearly won’t be sufficient on its own.
That was my first pass at how I’d start trying to increase the “self-awareness” of the movement. I would be interested in hearing more specifics about what you’d like to see happen.
> Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can’t come to nontrivial conclusions already, the kind of facts we’re likely to find won’t help very much.
A few reasons. One is that the model for research having an impact is: you do research → you find valuable information → people recognize your valuable information → people act differently. I have become increasingly pessimistic about people’s ability to recognize good research on issues like population ethics. But I believe people can recognize good research on stuff like shallow cause overviews.
Another consideration is our learning and development. I think the above consideration applies to us, not just to other people. If it’s easier for us to tell if we’re making progress, we’ll learn how to learn about these issues more quickly.
I believe that a lot of the more theoretical stuff needs to happen at some point. There can be a reasonable division of labor, but I think many of us would be better off loading up on the theoretical side after we had a stronger command of the basics. By “the basics” I mean stuff like “who is working on synthetic biology?” in contrast with stuff like “what’s the right theory of population ethics?”.
You might have a look at this conversation I had with Holden Karnofsky, Paul Christiano, Rob Wiblin, and Carl Shulman. I agree with a lot of what Holden says.
These donations are useful for establishing credibility as a real movement and not just “people talking on the internet”.
Yes, I’m well aware. I never said they were useless, just that IMO their marginal value is lower than that of resources spent elsewhere.
Also, as Ben notes,
> Still not so sure. Legibility and inferential distance are major constraints here. When trying to explain earning to give it’s much easier if the “give” part is something obviously good. Donor-advised funds combined with an intention to choose effective charities aren’t “obviously good” in the same way as a donation to a charity.