I think EA is something very distinct in itself. I do think that, ceteris paribus, it would be better to have a fund run by an EA than a fund not run by an EA. Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other’s points of view, and maintain civil and constructive dialogue than I do for other people. And secondly, EA simply has the right values. It’s a good culture to spread, which involves more individual responsibility and more philosophical clarity. Right now it’s embryonic enough that everything is tied closely together. I tentatively agree that that is not desirable. But ideally, growth of thoroughly EA institutions should lead to specialization and independence. This will lead to a much more interesting ecosystem than if the intellectual work is largely outsourced.
> Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other’s points of view, and maintain civil and constructive dialogue than I do for other people.
Why do you expect that to be true? How strongly? (“Ceteris paribus” could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?
Because they generally emphasize these values and practices when others don’t, and because they are part of a common tribe.
> How strongly? (“Ceteris paribus” could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?
Somewhat weakly, but not extremely weakly. Obviously there is no single clear criterion; it’s just a matter of people’s philosophical values and individual commitment. At most, I think that being a solid EA is about as important as having a couple of additional years of relevant experience or schooling.
I do think that if you had a research-focused organization where everyone was an EA, it would be better to hire outsiders at the margin, because of the problems associated with homogeneity. (This wouldn’t be the case for community-focused organizations.) I guess it just depends on where they are right now, which I’m not too sure about. If you’re only going to have one person doing the work, e.g. with an EA fund, then it’s better for it to be done by an EA.
Could hypothetically also make them more vulnerable to a person who correctly uses the right buzzwords to gain their trust for ill purposes, while someone who is not a member of the same tribe would be more skeptical.
I haven’t seen any parts of GiveWell’s analyses that involve looking for the right buzzwords. Of course, it’s possible that certain buzzwords subconsciously manipulate people at GiveWell in certain ways, but the same can be said for any group, because every group has some sort of values.
> And secondly, EA simply has the right values.
I think this is false, because I think EA is too heterogeneous to count as having the same set of values.