CEA and GiveWell are both building communities, GiveWell to the point of more than doubling its community (by measures such as number of donors and money moved, with web traffic growing somewhat more slowly) every year, year after year. Giving What We Can’s growth has been more linear, but 80,000 Hours has also had good growth (albeit somewhat less and over a shorter time).
That makes the bar for something like CFAR much, much higher than your model suggests, although there is merit in experimenting with a number of different models (and the Effective Altruism movement needs to cultivate the “E” element as well as the “A”, which something along the lines of CFAR may be especially helpful for).
ETA: I went through more GiveWell growth numbers in this post. Absolute growth excluding Good Ventures (a big foundation that has firmly backed GiveWell) was fairly steady for the 2010-2011 and 2011-2012 comparisons, although growth has looked more exponential in other years.
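A toy sketch (in Python, with entirely hypothetical numbers, not GiveWell’s actual figures) of why “doubling every year” and “fairly steady absolute growth” diverge so quickly:

```python
# Toy comparison of exponential (doubling) vs. linear (steady absolute) growth.
# All numbers below are hypothetical illustrations, not real donor figures.

def exponential(start, factor, years):
    """Community size if it multiplies by `factor` each year."""
    return [start * factor**y for y in range(years + 1)]

def linear(start, step, years):
    """Community size if it adds a fixed `step` each year."""
    return [start + step * y for y in range(years + 1)]

doubling = exponential(1000, 2, 5)  # [1000, 2000, 4000, 8000, 16000, 32000]
steady = linear(1000, 1000, 5)      # [1000, 2000, 3000, 4000, 5000, 6000]
```

After five years the doubling community is more than five times the size of the linearly growing one, even though both added the same amount in year one; this is why a few years of data can distinguish the two patterns.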
On reflection, this is an opportunity for me to be curious. The relevant community-builders I’m aware of are:
CFAR
80,000 Hours / CEA
GiveWell
Leverage Research
Whom am I leaving out?
My model for what they’re doing is this:
GiveWell isn’t trying to change much about people at all directly, except by helping them find efficient charities to give to. It’s selecting people by whether they’re already interested in this exact thing.
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary “rationality infusion,” but isn’t trying to alter anyone’s underlying character in a lasting way beyond that.
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one’s own life problems.
Leverage has tried to directly approach the problem of creating a hero-level community but doesn’t seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
Do any of these descriptions seem off? If so, how?
PS I don’t think I would have stuck my neck out & made these guesses in order to figure out whether I was right, before the recent CFAR workshop I attended.
Do any of these descriptions seem off? If so, how?
Some comments below.
GiveWell isn’t trying to change much about people at all directly, except by helping them find efficient charities to give to. It’s selecting people by whether they’re already interested in this exact thing.
And publishing detailed analysis and reasons that get it massive media attention and draw in and convince people who may have been persuadable but had not in fact been persuaded. Also in sharing a lot of epistemic and methodological points on their blogs and site. Many GiveWell readers and users are in touch with each other and with GiveWell, and GiveWell has played an important role in the growth of EA as a whole, including people making other decisions (such as founding organizations and changing their career or research plans, in addition to their donations).
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary “rationality infusion,” but isn’t trying to alter anyone’s underlying character in a lasting way beyond that.
I would add that counseled individuals and the extensive web traffic also get exposed to ideas like prioritization, cause-neutrality, wide variation in effectiveness, etc., and ways to follow up. They built a membership/social-networking functionality, but I think they are making it less prominent on the website to focus on the research and counseling, in response to their experience so far.
Separately, how much of a difference is there between a three-day CFAR workshop and a temporary “rationality infusion”?
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality,
The post describes a combination of selection for existing capacities, connection, and training, not creation (which would be harder).
but has so far mainly succeeded in some improvements in personal effectiveness for solving one’s own life problems.
As the post mentions, there isn’t clear evidence that this happened, and there is room for negative effects. But I do see a lot of value in developing rationality training that works, as measured in randomized trials using life outcomes, Tetlock-type predictive accuracy, or similar endpoints. I would say that the value of CFAR training today is more about testing/R&D and creating a commercial platform that can enable further R&D than any educational value of their current offerings.
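For concreteness, a common way to operationalize “Tetlock-type predictive accuracy” is the Brier score, the mean squared error between stated probabilities and realized outcomes; a minimal sketch in Python (the forecasts below are illustrative only):

```python
# Minimal Brier score: mean squared difference between forecast probabilities
# and binary outcomes (0 = didn't happen, 1 = happened).
# Lower is better; always answering 0.5 ("no idea") scores exactly 0.25.

def brier_score(forecasts, outcomes):
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A well-calibrated, confident forecaster beats a hedging one:
confident = brier_score([0.9, 0.8, 0.1], [1, 1, 0])  # 0.02
hedging = brier_score([0.5, 0.5, 0.5], [1, 1, 0])    # 0.25
```

A rationality-training trial could compare trainees’ scores on a common question set before and after training, which is one way to get the kind of endpoint mentioned above.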
Leverage has tried to directly approach the problem of creating a hero-level community but doesn’t seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
I don’t know much about what they have been doing lately, but they have had at least a couple of specific achievements. They held an effective altruist conference that was well-received by several people I spoke with, and a small percentage of people donating or joining other EA organizations report that they found out about effective altruism ideas through Leverage’s THINK.
They may have had other more substantial achievements, but they are not easily discernible from the Leverage website. Their team seems very energetic, but much of it is focused on developing and applying a homegrown amateur psychological theory that contradicts established physics, biology, and psychology (previous LW discussion here and here). That remains a significant worry for me about Leverage.
Thank you, that’s helpful.
MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.
Those predate the founding of CFAR; at that time MIRI (then SI) was doing double duty as a rationality organisation. It’s explicitly pivoted away from that and community building since.
It would be nice if all that doubling helped save the world somehow, after all.
That makes sense. It depends on whether the bar is much higher than what there already is for “competent, rational” etc. AND how much better (if at all) CFAR is at making people so and finding those people. I think the first is pretty likely, but at this point the second is merely at the level of plausibility. (Which is still really impressive!)
The main problem with teaching generic success skills is the old adage “those who can’t, teach”. Donations only exacerbate this problem by lowering the barrier to entry.
Only when there isn’t a secondary goal in mind. For example, apprenticeship is a process where someone who clearly can do, teaches, because the master recognizes that some of their tasks are better performed by novice apprentices than by themselves—and the only way to guarantee quality novice apprentices is to create them.
For CFAR, the magnum opus seems to be human uplift—a process where the doing and the teaching are simply different levels of the same process.
The point is that there are many people who want to spread their message on how to effectively attain your goals. Generally, the quality of the message is going to correlate positively with success, and thus negatively with being short on money or depending on charitable contributions.
I am not sure what your definition of “success” is, but why exactly should getting money through contributions be worse than getting money by any other means?
If “success” is just a black box for doing what you wanted to do, then CFAR asking for money, getting donations, and using them to teach their curriculum is, by definition, a success.
If “success” is something else, then… please be more specific.
If “success” is just a black box for doing what you wanted to do, then CFAR asking for money, getting donations, and using them to teach their curriculum
Wait. The success of extracting this specific piece of money from you (the donation whose utility you are pondering) is not yet decided. Furthermore, their prior success at finding actions that produce a lot of money must have been quite low.
edit: besides, the end goal is wealth creation.
Artisan masters (or, to some extent, college professors, at least in scientific and technical fields) generally have a track record of being good at doing what they teach.
Self-help instructors usually only have a track record of being good at making a living from being self-help instructors (which includes being good at self-promotion to the relevant audience). As far as I know, CFAR staff are no different in that regard.
EDIT:
And if you give them donations, they don’t even have to be good at it!
Self-help instructors usually only have a track record of being good at making a living from being self-help instructors (which includes being good at self-promotion to the relevant audience).
As far as I know, CFAR staff are no different in that regard.
While to some extent I think this criticism may be valid, especially given the fact that it was a known factor prior to the foundation of CFAR, I think it’s not entirely fair. Given that CFAR is more or less attempting to create a new curriculum and area of study, it isn’t entirely clear what it would look like to have a proven track record in the field.
Now obviously CFAR would be more impressive if it was being run by Daniel Kahneman. But given that that isn’t going to happen, I think the organization that we have is doing a fairly good job, especially given that many of their staff members have impressive accomplishments in other domains.
it isn’t entirely clear what it would look like to have a proven track record in the field.
They want to teach people how to be rational, professionally successful, and altruistic, hence it would be desirable if the staff had strong credentials in those areas: being successful scientists, inventors, or entrepreneurs, having done something that unquestionably helped many other people, and so on.
especially given that many of their staff members have impressive accomplishments in other domains.
Such as?
According to the OP, CFAR has five full-time employees. I suppose they are the first five people listed on the website (Galef, Salamon, Smith, Critch and Amodei). Galef is a blogger and podcaster, Amodei was a theatre stage manager, and the others are mathematicians: Critch is the only PhD among them and has done some research in abstract computer science and applied math. I don’t have the expertise to evaluate his work; does it count as an impressive accomplishment? Salamon mostly worked at SIAI/SI/MIRI and didn’t publish much outside MIRI’s own venues and philosophical conferences. Smith I don’t know, because I can’t find much information online.
EDIT:
Actually, according to the profile, Smith has a PhD in math education.
Impressiveness exists in the map, not the territory—but I certainly think so.
Kinda. Science is inter-subjective. Whether or not somebody’s contributions are considered breakthroughs by domain experts is an empirical question.
it isn’t entirely clear what it would look like to have a proven track record in the field.
Having a track record of creating something else that’s unambiguously useful would be a start.
Mostly, people attempt to do grand and exceptional things either due to having evidence (prior high performance, for example), or due to having delusions of grandeur (a prior history of such delusions). Those are two very distinct categories.
Certainly—that’s what I was discussing when I wrote “many of their staff members have impressive accomplishments in other domains.”
On the other hand, the reason said enterprise is seeking donations is largely that the most involved members’ prior endeavours failed to monetize despite, in some cases, the presence of some innate talent. That is a situation suggestive not of exceptionally superior rationality but rather of inferior rationality.