As a practicing socialist, I found the comparison to Communism illuminating and somewhat disturbing.
You’ve already listed some of the major, obvious aspects in which the Effective Altruism movement resembles Communism. Let me add another: failure to take account of local information and preferences.
Information: Communism (or as the socialists say: state capitalism, or as the dictionaries say: state socialism—centrally planned economies!) failed horrifically at the Economic Calculation Problem because no central planning system composed of humans can take account of all the localized, personal information inherent in real lives. Markets, on the other hand, can take advantage of this information, even if they’re not always good at it (see for a chuckle: “Markets are Efficient iff P=NP”). Effective altruism, being centrally planned, suffers this problem.
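To make that concrete, here is a minimal toy model of the information problem (my own construction: the linear utility function, prices, and all numbers below are invented for illustration, not drawn from the economics literature). Agents hold private tastes over two goods, a planner sees only the population average, and a “market” lets each agent spend the same budget using what they privately know.

```python
# Toy sketch of the Economic Calculation Problem: same total resources, but
# the planner sees only aggregate information while agents use private tastes.
# All numbers and the crude linear utility function are invented.
import random

random.seed(0)
N = 10_000
BUDGET = 10.0                # units of spending power per agent
PRICE_A = PRICE_B = 1.0      # posted prices, equal for simplicity

# Private, heterogeneous tastes: each agent's weight on good A vs. good B.
tastes = [random.random() for _ in range(N)]

def utility(taste, qty_a, qty_b):
    return taste * qty_a + (1 - taste) * qty_b

# Central planner: knows only the average taste, so everyone receives the
# bundle that is optimal for that average agent.
avg_taste = sum(tastes) / N
if avg_taste >= 0.5:
    plan = (BUDGET / PRICE_A, 0.0)
else:
    plan = (0.0, BUDGET / PRICE_B)
planner_welfare = sum(utility(t, *plan) for t in tastes)

# "Market": each agent spends the same budget on whichever good they
# personally value more -- the locally held information the planner lacks.
market_welfare = sum(
    utility(t, BUDGET / PRICE_A, 0.0) if t >= 0.5
    else utility(t, 0.0, BUDGET / PRICE_B)
    for t in tastes
)

print(f"planner welfare: {planner_welfare:,.0f}")  # roughly 50,000
print(f"market welfare:  {market_welfare:,.0f}")   # roughly 75,000
```

The market allocation wins not because the agents are smarter than the planner, but because the relevant information never had to travel anywhere.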
Preferences: the other major failure of Communist central planning was its foolish claim that the entirety of society had a single, uniform set of valuations over economic inputs and outputs which was determined by the planning committee in the Politburo. The result was, of course, that the system produced vast amounts of things the Politburo thought were Very Important (such as weapons, to Smash the Evil Capitalists), and vast amounts of capital inputs (that sometimes sat uselessly because nobody really wanted them), but very, very small amounts of things most people actually preferred (like meat for household consumption).
Given, as you’ve mentioned, the overwhelmingly uniform and centrally planned nature of the Effective Altruism movement, you should expect to suffer exactly the same systematic problems as Communism. My best recommendation for fixing the problem is to come up with an optimization metric for Doing Good that doesn’t require your movement to personally know and plan all the facts and all the values of each altruistic intervention from the top down. Find a metric by which you can encourage the philanthropic/charitable system to optimize itself from the bottom up, and then unleash it!
Hold on a second. This is news to me.
What is it about EA being centrally planned?
My guess is that Eli is referring to the fact that the EA community seems to largely donate wherever GiveWell says to donate, and that much of the discourse centers on trying to figure out all the effects of a particular intervention, weighing it against every other consideration, and then producing a plan of what to do. Such a plan is extremely sensitive to your being right about the prioritization, the facts of the situation, and so on, in a way that will cause you to predictably do worse than you could: there is little on-the-ground feedback pointing to other important areas, people’s values get misunderstood, errors in reasoning creep in, and a lack of diversity in attempts means that if one part fails, nothing gets accomplished.
I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) and doesn’t suffer from the “we’re figuring out what other people value” problem as much as other causes do, but I also think it’s almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.
I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but the community (at least the Facebook group) is somewhat slow to catch up.
As a first approximation of my thinking a week ago, I was thinking of things like this. Many acts of “charity” consist of trying to manage the lives of the unfortunate for them, and evidence is emerging that, well, the unfortunate know their own needs better than we do: we should empower them and leave them “free to optimize”, so to speak.
Not that malaria relief or anything like it is a bad cause, but I generally have more “feeling” regarding poverty myself, since combating poverty over the middle term (longer than a year, shorter than a generation, let’s say) tends to leave the individual beneficiaries able to solve a lot of their other problems, and has generational knock-on effects (reduced poverty leads to better nutrition and better building materials, meaning healthier, smarter children over time, meaning people can do more to solve their remaining issues, and so on).
And then I was also definitely thinking about people trying to “do maximum good” through existential-risk reduction donations (including MIRI, but not just MIRI), and how these donations tend to be… dubiously effective. Sure, we’re not dead yet, but very few organizations can evidentially demonstrate that they’re actively reducing the probability that we all die. That is, if I want to be less-probably dead next year than this year, I don’t know to whom to donate.
EDIT: Regarding the latter paragraph, I wish to note that I did give MIRI $72 this past year, calculated as the equivalent price of several Harry Potter novels for which the author deserved payment. If I become convinced that MIRI/FHI are actually effective both at ensuring that AI doesn’t kill us all off and at doing better than throwing the human species into a permanent Medieval Stasis (i.e., that they can “save the world”), resulting in the much-lauded futuristic utopia they use for their recruiting pitches, I will donate larger sums quite willingly. I also want to actually engage with the scientific/philosophical problems involved myself, just to be damn sure. So don’t think I’m being insulting here; I’m just pointing out that “we’re the only ones thinking about AI risk and other x-risk” (which is mostly true: almost all popular consideration of AI risk past the level of Terminator movies has been brought on by MIRI/FHI propagandizing) is not very good evidence for “we’re effectively reducing the odds of AI being a problem and increasing the odds of a universe tiled in awesomeness”.
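To put a number on “dubiously effective”: the expected value of an x-risk donation is a product of quantities nobody can observe, and plausible guesses for them spread the answer across orders of magnitude. A back-of-the-envelope sketch in which every parameter is invented for illustration, not anyone’s published estimate:

```python
# Sensitivity sketch: expected extinction-risk reduction per dollar donated.
# Every input below is a made-up illustrative guess, not a real estimate.
def risk_reduction_per_dollar(p_works, delta_risk, budget_needed):
    # p_works:       probability the organization's approach helps at all
    # delta_risk:    total risk reduction achieved if it does help
    # budget_needed: dollars required to realize that reduction
    return p_works * delta_risk / budget_needed

scenarios = {
    "optimistic":  (0.30, 1e-2, 1e8),   # 30% it works, 1% risk off, $100M
    "middling":    (0.05, 1e-3, 1e9),   # 5% it works, 0.1% risk off, $1B
    "pessimistic": (0.01, 1e-5, 1e9),   # 1% it works, 0.001% risk off, $1B
}

for name, params in scenarios.items():
    per_dollar = risk_reduction_per_dollar(*params)
    print(f"{name:>11}: {per_dollar:.1e} risk reduction per $")
```

The answers span roughly five orders of magnitude, and none of the inputs is observable, which is exactly why no organization can evidentially demonstrate that it is lowering the probability that we all die.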
should empower them and leave them “free to optimize”

Yes, but the (currently prevalent) alternative is not central planning, but rather the proliferation of a variety of different “let-us-manage-your-lifestyle” organizations.
very few organizations can evidentially demonstrate that they’re actively reducing the probability that we all die.

Actually, I can’t think of any. But still, what does this all have to do with central planning?
Would you like me to amend “central” planning to “external” planning? As in, organizations that attempt to plan people’s lives in an interfering sort of way? Sorry, I just want to check whether we’re about to get into a massive argument about vocabulary or whether there’s some place where we’re actually talking about the same thing.
Interesting; I hadn’t previously thought much about the analogy between (macro) economic planning and (micro) goods-and-services-oriented charity, and it probably does deserve some thought.
Still, the analogy isn’t exact. If we’re talking about basic necessities, things like food and clothes, then the argument seems strong: people’s exact needs will differ in ways that aren’t easy to predict, and direct distribution of goods will therefore incur inefficiencies that cash transfers won’t. I’m pretty sure that GiveWell and its various peers know about these pitfalls, as evidenced by GiveDirectly’s consistently high ranking. But I can also think of situations where there are information, infrastructure, or availability problems to overcome—market defects, in other words—that cash won’t do much for in the medium term, and it’s plausible to me that many of the EA community’s traditional beneficiaries do work in this space.
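Here is a toy version of that trade-off (all numbers invented): recipients need either food or medicine, an in-kind program ships food to everyone, and the value of cash depends on whether local markets actually stock what each recipient needs.

```python
# In-kind vs. cash transfers under heterogeneous needs and market defects.
# All values are invented for illustration.
import random

random.seed(1)
N = 10_000
needs = [random.choice(["food", "medicine"]) for _ in range(N)]

def in_kind_welfare(needs):
    # Everyone receives food; it is worth little to those needing medicine.
    return sum(1.0 if need == "food" else 0.2 for need in needs)

def cash_welfare(needs, p_unavailable):
    # Cash buys the needed good unless the local market fails to stock it,
    # in which case it goes to a low-value fallback purchase.
    total = 0.0
    for _ in needs:
        total += 0.2 if random.random() < p_unavailable else 1.0
    return total

print(f"in-kind:            {in_kind_welfare(needs):,.0f}")
for p in (0.0, 0.2, 0.5, 0.8):
    print(f"cash, p_unavail={p}: {cash_welfare(needs, p):,.0f}")
```

With functioning markets, cash dominates because it exploits each recipient’s knowledge of their own needs; as availability failures rise, the advantage evaporates, which is where goods-and-services charities plausibly still earn their keep.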
As to existential risk… well, that’s a completely different approach. To borrow a phrase from GiveWell’s blog, existential risk reduction is an extreme charity-as-investment strategy, and there’s very little decent analysis covering it. I don’t entirely trust MIRI’s in-house estimates, but I couldn’t point you to anything better, either.
Well, you just raised my opinion of GiveWell.
I guess it’s mostly a terminology thing. I associate “central planning” with things like the USSR, and it was jarring to see an offhand reference to EA being centrally planned.
If we redefine things in terms of external management/control vs. just providing resources without strings attached, I don’t know if we disagree much.
In that case, I think I could spend part of the evening hammering out what precisely our differences are, or I could get off LessWrong and do my actual job.
Currently choosing the latter.
This seems like a noncentral use of “centrally planned”, meaning something like “there exists a highly influential opinion leader” … or else a noncentral use of “EA”, meaning something like “give all your money to GiveWell and let them sort it out”.
Given that the context is a comparison to communism, your explanation doesn’t look likely. But I’m sure Eli can explain his meaning if he wants to.
It’s centrally planned in the sense that the optimizer behind it is a committee/bureaucracy, as opposed to, say, a market. Of course, a market in charity tries to optimize warm fuzzies, so I don’t know of a better solution.
Edit: Or rather, the problem is that effective charity is a credence good (one whose quality the buyer cannot assess even after consuming it).