rationality does not involve values and altruism is all about values. They are orthogonal.
Effective altruism isn’t just being extra super altruistic, though. EA as currently practiced presupposes certain values, but its main insight isn’t value-driven: it’s that you can apply certain quantification techniques toward figuring out how to optimally implement your value system. For example, if an animal rights meta-charity used GiveWell-inspired methods to recommend groups advocating for veganism or protesting factory farming or rescuing kittens or something, I’d call that a form of EA despite the differences between its conception of utility and GiveWell’s.
Seen through that lens, effective altruism seems to have a lot more in common with LW-style rationality than, say, a preference for malaria eradication does by itself. We don’t dictate values, and (social pressure aside) we probably can’t talk people into EA if their value structure isn’t compatible with it, but we might easily make readers with the right value assumptions more open to quantified or counterintuitive methods.
Yes, of course.
So does effective proselytizing, for example. Or effective political propaganda.
Take away the “presupposed values” and all you are left with is effectiveness.
Yes, LW exposure might easily dispose readers with those inclinations toward EA-like forms of political or religious advocacy, and if that’s all they’re doing then I wouldn’t call them effective altruists (though only because politics and religion are not generally considered forms of altruism). That doesn’t seem terribly relevant, though. Politics and religion are usually compatible with altruism, and nothing about effective altruism requires devotion solely to GiveWell-approved causes.
I’m really not sure what you’re trying to demonstrate here. Some people have values incompatible with EA’s assumptions? That’s true, but it only establishes the orthogonality of LW ideas with EA if everyone with compatible values was already an effective altruist, and that almost certainly isn’t the case. As far as I can tell there’s plenty of room for optimization.
(It does establish an upper bound, but EA’s market penetration, even after any possible LW influence, is nowhere near it.)
I’m really not sure what you’re trying to demonstrate here.
That rationality and altruism are orthogonal. That effective altruism is predominantly altruism and “effective” plays second fiddle to it. That rationality does not imply altruism (in case you think that’s a straw man, tom_cr seems to claim exactly that).
If effective altruism were predominantly just altruism, we wouldn’t be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been. I see this as strong evidence that it’s something distinct, and therefore that it makes sense to talk about something like LW rationality methods bolstering it despite rationality’s silence on pure questions of values.
Yes, it’s just [a method of quantifying] effectiveness. But effectiveness in this context, approached in this particular manner, is more significant (and, perhaps more importantly, a lot less intuitive) than I think you’re giving it credit for.
we wouldn’t be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been
I don’t know about that. First, EA is competition for a limited resource, donors’ money, and even worse, EA keeps on telling others that they are doing it wrong. Second, the idea that charity money should be spent in effective ways is pretty uncontroversial. I suspect (that’s my prior, adjustable by evidence) that most of the criticism is aimed at specific recommendations of GiveWell and others, not at the concept of getting more bang for your buck.
Take a look at Bill Gates. He is explicitly concerned with the effectiveness and impact of his charity spending—to the degree that he decided to bypass most established nonprofits and set up his own operation. Is he a “traditional” or an “effective” altruist? I don’t know.
and, perhaps more importantly, a lot less intuitive
Yes, I grant you that. Traditional charity tends to rely on purely emotional appeals. But I don’t know if that’s enough to push EA into a separate category of its own.