My biggest problem with EA is the excessive focus on a specific metric with no consideration of higher-order plans or effects. The epitome of naive utilitarianism.
On one hand, I’m not sure that’s all of effective altruism. Those focused on existential risk reduction, such as MIRI, consider themselves part of effective altruism, and they haven’t typically quantified the value of ensuring a flourishing future civilization of trillions of human-like descendants in quality-adjusted life years (henceforth, QALYs). On the other hand, at the 2014 Effective Altruism Summit (which I attended; it’s just a big EA conference), Eliezer Yudkowsky presented the potential value of MIRI’s work in QALYs, on the assumption that the work would prevent a counterfactual extinction of humanity and Earth-originating intelligence. It was some extravagantly big number expressed in scientific notation, calculated as the expected years of happy life for so many trillions of future people. This is just my impression, but I think Yudkowsky and MIRI did this to accommodate the rest of the community’s knee-jerk demand for specific metrics.
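To make the shape of that calculation concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder I made up for illustration, not a figure Yudkowsky actually used:

```python
# Back-of-the-envelope expected-QALY estimate for an extinction-risk
# intervention. All values below are assumed placeholders, not the
# actual figures from the 2014 presentation.

future_people = 1e12          # assumed: trillions of future people
happy_years_each = 50         # assumed: happy life-years per person
p_averts_extinction = 1e-6    # assumed: probability the work is decisive

expected_qalys = future_people * happy_years_each * p_averts_extinction
print(f"Expected QALYs: {expected_qalys:.2e}")  # scientific notation, as presented
```

The point is just that a tiny probability multiplied by an astronomically large payoff still yields a huge expected value, which is why the final number comes out in scientific notation.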
I’ve also met, in person, several folks from Less Wrong and its cluster with loftier visions for improving humanity’s lot in the nearer-term future than handing out mosquito nets or deworming children near the equator, who range from lukewarm toward to supportive of effective altruism as a community. They seem dismissive of the naive utilitarianism in effective altruism, too. I myself take issue with too much utilitarianism being injected into effective altruism. I think of effective altruism as a vehicle that took inspiration from utilitarianism, but that should mostly serve as a motivator and coordinating network for pragmatic action among all sorts of people, rather than as an ethical theory that can and should be picked apart. I admit we in effective altruism don’t tackle this issue well. That could be because the opinion that utilitarianism is overriding what could be the dynamic rationality of effective altruism is a minority one. I’m not confident I and like-minded others can change that for the better.
Evan—I am also involved in effective altruism, and am not a utilitarian. I am a consequentialist and often agree with the utilitarians in mundane situations, though.
drethelin—What would be an example of a better alternative?
I don’t think anyone really CAN reliably consider any but the crudest higher-order effects, like population size...