We have a fair amount of data on the experiences of people who have been to CFAR workshops.
First, systematic quantitative data. We send out a feedback survey a few days after the workshop which includes the question “0 to 10, are you glad you came?” The average response to that question is 9.3. We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.
Less systematically but in more fleshed-out detail, there are several reviews that people who have attended a CFAR workshop have posted to their blogs (A, B+pt2, C+pt2) or to LW (1, 2, 3). Ben Kuhn’s (also linked above under “C”) seems particularly relevant here, because he went into the workshop assigning a 50% probability to the hypothesis that “The workshop is a standard derpy self-improvement technique: really good at making people feel like they’re getting better at things, but has no actual effect.”
In-person conversations that I’ve had with alumni (including some interviews that I’ve done with alumni about the impact the workshop had on their lives) have tended to paint a picture similar to these reviews, from a broader set of people, but those data are harder for me to share.
We don’t have as much data on the experiences of people who have been to test sessions or shorter events. I suspect that most people who come to shorter events have a positive experience, and that there’s a modest benefit on average, but that it’s less uniformly positive. Partly that’s because a lot of what happens at a full workshop doesn’t fit in a briefer event: more time for conversations between participants to digest the material, more time for one-on-one conversations with CFAR staff to sort through things, followups after the workshop to work with someone on implementing things in your daily life, etc. The full workshop is also more practiced and polished, since it has been through many more iterations; test sessions are much less so, and one-day events are in between (the ones advertised as alpha tests of a new thing are closer to the test-session end of the spectrum).
We send out a feedback survey a few days after the workshop which includes the question “0 to 10, are you glad you came?” The average response to that question is 9.3.
I’ve seen CFAR talk about this before, and I don’t view it as strong evidence that CFAR is valuable.
If people pay a lot of money for something that’s not worth it, cognitive dissonance alone would lead us to expect them to rate it as valuable: having paid, they’re motivated to believe it was worth the cost.
If people rate something as valuable, is it because it improved their lives, or because it made them feel good?
For these ratings to be meaningful, I’d like to see something like a control workshop where CFAR asks people to pay $3900, teaches them a bunch of techniques that are known to be useless but still sound cool, and then asks them to rate their experience. Obviously this is both unethical and impractical, so I don’t suggest actually doing it. Perhaps “derpy self-improvement” workshops can serve as a control?
Hey Dan, thanks for responding. I wanted to ask a few questions:
You noted the non-response rate for the 20 randomly selected alumni. What about the non-response rate for the feedback survey?
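To illustrate why the non-response rate matters, here is a quick bounding exercise using the alumni-survey numbers you reported above (the scores imputed to the two non-responders are hypothetical):

```python
# Rough bounding exercise: how much could two non-responders shift an average?
# The 9.6 mean over 18 of 20 respondents comes from the comment above; the
# scores imputed to the non-responders are hypothetical.
respondent_mean, n_respondents, n_total = 9.6, 18, 20

for imputed_score in (0, 5, 10):  # pessimistic, middling, optimistic guesses
    overall = (respondent_mean * n_respondents
               + imputed_score * (n_total - n_respondents)) / n_total
    print(f"if both non-responders would have answered {imputed_score}: "
          f"overall average is about {overall:.2f}")
```

Even two missing responses can move the headline number by nearly a full point (9.6 down to 8.64 in the pessimistic case), which is why the response rate for the feedback survey matters.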
“0 to 10, are you glad you came?” is a leading question, since it presupposes that the respondent is glad they came. A negatively framed version would be something like “0 to 10, are you dissatisfied that you came?” Would it be possible to anonymize and post the survey questions and data?
We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.
It’s great that you’re following up with people long after the workshops end. Why not survey all alumni? You have their emails.
I’ve read most of the blog posts about CFAR workshops that you linked to—they were one of my main motivations for attending a workshop. I notice that all of the reviews are from people who had already participated in LessWrong and related communities (all of them mention prior engagement with CFAR, EA, or rationality-related topics before they attended). In-person conversations also seem heavily subject to selection and availability bias: the people you end up talking to (workshop attendees, people who know MIRI/CFAR staff, people involved in LW meetups in Berkeley and the surrounding area) would tend to skew those conversations positive. Evaporative cooling may also play a role, in that people who weren’t satisfied with the workshop would simply drift away from the community. Are there reviews from people who are not already familiar with LW or CFAR staff?
Also, I agree with MTGandP: it would be nice if CFAR could write a blog post or paper on how effective their teachings are compared to a control group. Perhaps two one-day events, with subjects randomized between the two days, would work well as a starting point.
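To make that concrete, here is a minimal sketch of what such a comparison could look like. The participant IDs, the placeholder score distributions, and the choice of Welch’s t-test are all my own illustrative assumptions, not anything CFAR has proposed:

```python
# Sketch of the proposed two-arm comparison: randomize applicants across two
# one-day events (one teaching the real curriculum, one acting as a control),
# then compare a follow-up outcome measure. All data below are hypothetical.
import random
from statistics import mean

from scipy import stats  # Welch's t-test for the final comparison

rng = random.Random(42)

applicants = [f"applicant_{i:02d}" for i in range(40)]
rng.shuffle(applicants)
curriculum_group, control_group = applicants[:20], applicants[20:]

# Placeholder follow-up scores (0-10); in practice these would come from a
# survey sent to both groups some months after the events.
curriculum_scores = [min(10, max(0, rng.gauss(8.0, 1.5))) for _ in curriculum_group]
control_scores = [min(10, max(0, rng.gauss(7.5, 1.5))) for _ in control_group]

result = stats.ttest_ind(curriculum_scores, control_scores, equal_var=False)
print(f"curriculum mean: {mean(curriculum_scores):.2f}, "
      f"control mean: {mean(control_scores):.2f}, p = {result.pvalue:.3f}")
```

Even a comparison this simple would say more about the teaching itself than post-workshop satisfaction ratings do.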