On reflection, this is an opportunity for me to be curious. The relevant community-builders I’m aware of are:
CFAR
80,000 Hours / CEA
GiveWell
Leverage Research
Whom am I leaving out?
My model for what they’re doing is this:
GiveWell isn’t trying to change much about people at all directly, except by helping them find efficient charities to give to. It’s selecting people by whether they’re already interested in this exact thing.
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary “rationality infusion,” but isn’t trying to alter anyone’s underlying character in a lasting way beyond that.
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one’s own life problems.
Leverage has tried to directly approach the problem of creating a hero-level community but doesn’t seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
Do any of these descriptions seem off? If so, how?
PS: Before the recent CFAR workshop I attended, I don’t think I would have stuck my neck out and made these guesses in order to find out whether I was right.
Do any of these descriptions seem off? If so, how?
Some comments below.
GiveWell isn’t trying to change much about people at all directly, except by helping them find efficient charities to give to. It’s selecting people by whether they’re already interested in this exact thing.
And by publishing detailed analyses and reasons that get it massive media attention and that draw in and convince people who may have been persuadable but had not in fact been persuaded. Also by sharing a lot of epistemic and methodological points on its blog and site. Many GiveWell readers and users are in touch with each other and with GiveWell, and GiveWell has played an important role in the growth of EA as a whole, including people making other decisions in addition to their donations (such as founding organizations or changing their career or research plans).
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary “rationality infusion,” but isn’t trying to alter anyone’s underlying character in a lasting way beyond that.
I would add that counseled individuals and the site’s extensive web traffic also get exposed to ideas like prioritization, cause-neutrality, and wide variation in effectiveness, along with ways to follow up. They built membership/social-networking functionality, but I think they are making it less prominent on the website in order to focus on the research and counseling, in response to their experience so far.
Separately, how much of a difference is there between a three-day CFAR workshop and a temporary “rationality infusion”?
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality,
The post describes a combination of selection for existing capacities, connection, and training, not creation (which would be harder).
but has so far mainly succeeded in some improvements in personal effectiveness for solving one’s own life problems.
As the post mentions, there isn’t clear evidence that this happened, and there is room for negative effects. But I do see a lot of value in developing rationality training that works, as measured in randomized trials using life outcomes, Tetlock-type predictive accuracy, or similar endpoints. I would say that the value of CFAR training today is more about testing/R&D and creating a commercial platform that can enable further R&D than about any educational value of their current offerings.
Leverage has tried to directly approach the problem of creating a hero-level community but doesn’t seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
I don’t know much about what they have been doing lately, but they have had at least a couple of specific achievements. They held an effective altruist conference that was well-received by several people I spoke with, and a small percentage of people donating or joining other EA organizations report that they found out about effective altruism ideas through Leverage’s THINK.
They may have had other more substantial achievements, but they are not easily discernible from the Leverage website. Their team seems very energetic, but much of it is focused on developing and applying a homegrown amateur psychological theory that contradicts established physics, biology, and psychology (previous LW discussion here and here). That remains a significant worry for me about Leverage.
Thank you, that’s helpful.
MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.
Those predate the founding of CFAR; at that time MIRI (then SI) was doing double duty as a rationality organisation. It’s explicitly pivoted away from that and community building since.