I think this is a very well-written and useful picture of what CFAR is up to. I applaud CFAR for writing this, and it puts me many steps closer to being willing to fund CFAR.
However, one concern of mine is that the altruistic value of CFAR does not seem to me to compare favorably with the value of other organizations expressly focused on do-gooding, like GiveWell or the Centre for Effective Altruism. It seems like CFAR would be a nice thing to fund once those organizations are more secure in their own funding, but that’s not true yet. Any thoughts on this? (As a disclaimer, I have more detailed reservations about funding CFAR that I may discuss if this becomes a conversation, so please don’t see my raising them later as moving the goalposts.)
I can give you a proof of concept, with actual numbers and examples omitted.
Consider a simplified model in which there are only two efficient charities, a direct one and CFAR, and no other helping is possible. If you give your charity budget to the direct charity, you help n people. If instead you give that money to CFAR, it transforms two inefficient givers into efficient givers (or doubles the money an efficient giver like you can afford to give), helping 2n people. The second option gives you more value for money.
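For concreteness, the toy model above can be run as a quick back-of-the-envelope calculation. This is purely an illustrative sketch: the cost-per-person figure, the budget, and the 2x multiplier are made-up assumptions of the model, not actual data about CFAR or any charity.

```python
# Toy model: donating directly vs. donating to a "meta" charity that
# converts inefficient givers into efficient ones.

COST_PER_PERSON_HELPED = 50.0  # hypothetical dollars per person helped

def direct_impact(budget: float) -> float:
    """People helped by giving `budget` straight to the direct charity."""
    return budget / COST_PER_PERSON_HELPED

def meta_impact(budget: float, multiplier: float = 2.0) -> float:
    """People helped if each dollar given to the meta charity produces
    `multiplier` dollars of efficient giving by others (the model's
    assumption that two inefficient givers become efficient)."""
    return (budget * multiplier) / COST_PER_PERSON_HELPED

budget = 1000.0
print(direct_impact(budget))  # 20.0 people helped
print(meta_impact(budget))    # 40.0 people helped
```

Under these assumptions the meta option helps exactly twice as many people per dollar; the whole argument then turns on whether the multiplier really exceeds 1 in practice.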
In addition, CFAR is explicitly trying to build a network of competent, rational do-gooders, with the expectation that the gains will be more than linear because of division of labor.
Finally, neither CEA nor GiveWell is working (AFAIK) on the problem of creating a group of people who can identify new, nonobvious problems and solutions in domains where we should expect untrained human minds to fail.
CEA and GiveWell are both building communities, GiveWell to the point of more than doubling its community every year, year after year (by measures such as number of donors and money moved, with web traffic growing slightly more slowly). Giving What We Can’s growth has been more linear, but 80,000 Hours has also had good growth (albeit somewhat less, and over a shorter time).
That makes the bar for something like CFAR much, much higher than your model suggests, although there is merit in experimenting with a number of different models (and the Effective Altruism movement needs to cultivate the “E” element as well as the “A”, which something along the lines of CFAR may be especially helpful for).
ETA: I went through more GiveWell growth numbers in this post. Absolute growth excluding Good Ventures (a big foundation that has firmly backed GiveWell) was fairly steady for the 2010-2011 and 2011-2012 comparisons, although growth has looked more exponential in other years.
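To see why sustained doubling sets such a high bar, here is a small illustrative sketch comparing doubling growth with linear growth. The starting size and the linear increment are invented numbers for illustration, not actual GiveWell or Giving What We Can figures.

```python
# Illustrative only: a community that doubles yearly (as claimed for
# GiveWell above) vs. one that grows by a fixed amount each year.

def doubling(initial: float, years: int) -> float:
    """Community size after `years` of doubling every year."""
    return initial * 2 ** years

def linear(initial: float, years: int, increment: float) -> float:
    """Community size after `years` of adding `increment` per year."""
    return initial + increment * years

start = 100.0  # hypothetical starting donor count
for year in range(6):
    print(year, doubling(start, year), linear(start, year, increment=100.0))
# After 5 years: 3200.0 donors under doubling vs. 600.0 under linear growth.
```

The gap widens every year, which is why a competing meta-charity needs a large per-dollar multiplier to beat simply growing an already-compounding community.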
On reflection, this is an opportunity for me to be curious. The relevant community-builders I’m aware of are:
CFAR
80,000 Hours / CEA
GiveWell
Leverage Research
Whom am I leaving out?
My model for what they’re doing is this:
GiveWell isn’t trying to change much about people at all directly, except by helping them find efficient charities to give to. It’s selecting people by whether they’re already interested in this exact thing.
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary “rationality infusion,” but isn’t trying to alter anyone’s underlying character in a lasting way beyond that.
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one’s own life problems.
Leverage has tried to directly approach the problem of creating a hero-level community, but doesn’t seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
Do any of these descriptions seem off? If so, how?
PS: Before the recent CFAR workshop I attended, I don’t think I would have stuck my neck out and made these guesses in order to figure out whether I was right.
Some comments below.
GiveWell is also publishing detailed analysis and reasoning that get it massive media attention and draw in and convince people who may have been persuadable but had not in fact been persuaded, and it shares a lot of epistemic and methodological points on its blog and site. Many GiveWell readers and users are in touch with each other and with GiveWell, and GiveWell has played an important role in the growth of EA as a whole, including people making other decisions (such as founding organizations and changing their career or research plans) in addition to their donations.
I would add that counseled individuals and the extensive web traffic also get exposed to ideas like prioritization, cause-neutrality, wide variation in effectiveness, etc., and to ways to follow up. They built membership/social-networking functionality, but I think they are making it less prominent on the website to focus on the research and counseling, in response to their experience so far.
Separately, how much of a difference is there between a three-day CFAR workshop and a temporary “rationality infusion”?
The post describes a combination of selection for existing capacities, connection, and training, not creation (which would be harder).
As the post mentions, there isn’t clear evidence that this happened, and there is room for negative effects. But I do see a lot of value in developing rationality training that works, as measured in randomized trials using life outcomes, Tetlock-type predictive accuracy, or similar endpoints. I would say that the value of CFAR training today lies more in testing/R&D, and in creating a commercial platform that can enable further R&D, than in any educational value of its current offerings.
I don’t know much about what they have been doing lately, but they have had at least a couple of specific achievements. They held an effective-altruism conference that was well received by several people I spoke with, and a small percentage of people donating to or joining other EA organizations report that they found out about effective altruism through Leverage’s THINK.
They may have had other, more substantial achievements, but they are not easily discernible from the Leverage website. Their team seems very energetic, but much of it is focused on developing and applying a homegrown amateur psychological theory that contradicts established physics, biology, and psychology (previous LW discussion here and here). That remains a significant worry for me about Leverage.
Thank you, that’s helpful.
MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.
Those predate the founding of CFAR; at that time MIRI (then SI) was doing double duty as a rationality organisation. It has explicitly pivoted away from rationality training and community building since.
It would be nice if all that doubling helped save the world somehow, after all.
That makes sense. It depends on whether the bar is much higher than what there already is for “competent, rational”, etc., AND on how much better (if at all) CFAR is at making people so and at finding such people. I think the first is pretty likely, but at this point the second is merely plausible. (Which is still really impressive!)
The main problem with teaching generic success skills is already “those who can’t, teach”. Donations only exacerbate this problem by lowering the barrier to entry.
Only when there isn’t a secondary goal in mind. For example, apprenticeship is a process where someone who clearly can do, teaches, because the master recognizes that some of their tasks are better performed by novice apprentices than by themselves—and the only way to guarantee quality novice apprentices is to create them.
For CFAR, the magnum opus seems to be human uplift—a process where the doing and the teaching are simply different levels of the same process.
The point is that there are many people who want to spread their message on how to effectively attain your goals. Generally, the quality of the message is going to correlate positively with success, and thus negatively with being short on money or depending on charitable contributions.
I am not sure what your definition of “success” is, but why exactly should getting money through contributions be worse than getting money by any other means?
If “success” is just a black box for doing what you wanted to do, then CFAR asking for money, getting donations, and using them to teach their curriculum is, by definition, a success.
If “success” is something else, then… please be more specific.
Wait. The success of extracting this specific piece of money from you (the one whose donation utility you are pondering) is not yet decided. Furthermore, their prior success at finding actions that produce a lot of money must have been quite low.
EDIT: Besides, the end goal is wealth creation.
Artisan masters (or, to some extent, college professors, at least in scientific and technical fields) generally have a track record of being good at doing what they teach.
Self-help instructors usually only have a track record of being good at making a living as self-help instructors (which includes being good at self-promotion to the relevant audience).
As far as I know, CFAR staff are no different in that regard.
EDIT:
And if you give them donations, they don’t even have to be good at it!
While I think this criticism may be valid to some extent, especially given that it was a known factor prior to the founding of CFAR, I think it’s not entirely fair. Given that CFAR is more or less attempting to create a new curriculum and area of study, it isn’t entirely clear what it would look like to have a proven track record in the field.
Now, obviously CFAR would be more impressive if it were being run by Daniel Kahneman. But given that that isn’t going to happen, I think the organization that we have is doing a fairly good job, especially given that many of their staff members have impressive accomplishments in other domains.
They want to teach people how to be rational, professionally successful, and altruistic; hence it would be desirable for the staff to have strong credentials in those areas, such as being successful scientists, inventors, or entrepreneurs, or having done something that unquestionably helped many other people.
Such as?
According to the OP, CFAR has five full-time employees. I suppose they are the first five people listed on the website (Galef, Salamon, Smith, Critch and Amodei).
Galef is a blogger and podcaster, Amodei was a theatre stage manager, and the others are mathematicians:
Critch is the only PhD among them and has done some research in abstract computer science and applied math. I don’t have the expertise to evaluate his work; does it count as an impressive accomplishment?
Salamon mostly worked at SIAI/SI/MIRI and didn’t publish much outside MIRI’s own venues and philosophical conferences.
Smith, I don’t know, because I can’t find much information online.
EDIT:
Actually, according to the profile, Smith has a PhD in math education.
Impressiveness exists in the map, not the territory—but I certainly think so.
Kinda. Science is inter-subjective. Whether or not somebody’s contributions are considered breakthroughs by domain experts is an empirical question.
Having a track record of creating something else that’s unambiguously useful would be a start.
Mostly, people attempt to do grand and exceptional things either because they have evidence (prior high performance, for example) or because they have delusions of grandeur (a prior history of such delusions). Those are two very distinct categories.
Certainly—that’s what I was discussing when I wrote “many of their staff members have impressive accomplishments in other domains.”
On the other hand, the reason said enterprise is seeking donations is largely that the most involved members’ prior endeavours failed to monetize despite, in some cases, the presence of some innate talent. That is a situation suggestive not of exceptionally superior rationality but rather of inferior rationality.
I agree with you on this, but I think CEA is that meta-charity you’re talking about, not CFAR. The reason for this is that CFAR and CEA (via Giving What We Can and 80,000 Hours) are both focused on building a community of do-gooders, but only CEA is doing it explicitly.
My understanding from current CFAR workshops is that CFAR doesn’t have much content about effectively donating or effective altruism per se, though I could be missing something.
Is there any before/after analysis of CFAR attendees on metrics like amount of money donated or donation targets?
~
I agree this is the key benefit of CFAR, though I think it’s hard to know at the moment whether CFAR is going to adequately accomplish this (though I do agree that current CFAR material is high-quality and getting better).
That’s pretty much why I wanted a commitment to certain epistemic rationality projects: to show that it’s possible to train that better (which has high VOI) and to make sure CFAR gets some momentum in that direction.
It’s a complicated subject, of course, but my own impression is that CFAR is indeed a good place to donate on the present margin, from the perspective of long-term world-improvement, even bearing in mind that there are other organizations one could donate to that are focused on community building around effective altruism.
My reason for this is two-fold:
(1) Both epistemic rationality and strategicness really do seem to have high yield in an effective altruism context—and so it’s worth making a serious effort to see if we can increase these (I expect we can); and
(2) It’s worth having a portfolio that includes multiple strong efforts at creating high-impact people. CEA is awesome, and if I thought that it was about to falter and that CFAR was strong, I would be seeking to direct money to CEA. But the two organizations are non-redundant—CEA appeals largely to those who are already interested in altruism; CFAR appeals also to many potentially high-impact people who are interested in entrepreneurship, or in increasing their own powers, or in rationality, and who have not yet thought seriously about do-gooding. (Who then may.)
The SPARC program (for highly math-talented high school students) seems particularly key to me as a potential influencer of future technology, and it would, I think, be much harder for other organizations in this space to run such a program.
I’d be glad to engage more directly with your concerns, if you want to fill them in a bit more—either here or by Skype. I suspect I’ll learn from the conversation regardless. Maybe CFAR’s strategy will also improve.
Sorry for the delayed response, but I’d be interested in hearing more. I think it would be easiest to just Skype, so I’ve scheduled a time slot for the 21st. I look forward to it.
It’d be great if someone from CFAR could spell out the case for its having a large positive impact (on the things we ultimately care about, such as human welfare). If I understand it correctly, Anna’s post suggests that CFAR will do good by creating a highly effective community of do-gooders, but this would benefit from a bit more substantiation. For example, could CFAR give some specific cases in which their training has increased the ultimate good done by its recipients? And could someone fully describe a typical or representative story by which CFAR training increases human welfare?