I’d like to ask LessWrong’s advice. I want to benefit from CFAR’s knowledge of improving one’s instrumental rationality, but as a poor graduate I have neither several thousand dollars in disposable income nor a quick way to acquire it. I’ve read >90% of the Sequences, but despite having read lukeprog’s and Alicorn’s sequences, I am aware that I do not know what I do not know about motivation and akrasia. How can I best improve my instrumental rationality on the cheap?
Edit: I should clarify that I am asking for information sources: blogs, book recommendations, and particularly practice exercises and other high-quality content. I also have a good deal of interest in the science behind motivation, cognitive rewiring, and reinforcement. I’ve searched on my own and have a number of things on my reading list, but I wanted the advice of people who have already tried, read, or vetted these techniques, so I can find and focus on the good stuff and ignore the pseudoscience.
I’ve been to several of CFAR’s classes over the last two years (some test classes and some more ‘official’ ones), and I feel it wasn’t a good use of my time. Spend your money elsewhere.
What made it a poor use of your time?
I didn’t learn anything useful. They taught, among other things, “here’s what you should do to gain better habits”. I tried it and it didn’t work for me. YMMV.
One thing that really irked me was the use of cognitive ‘science’ to justify their lessons ‘scientifically’. They did this by using big scientific words in a way that felt like an attempt to impress us with their knowledge. (I’m not sure what the correct phrase is—the words weren’t constraining beliefs? didn’t pay rent? They could have made up scientific-sounding words and it would have had the same effect.)
Also, they had a giant one-to-two-page list of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles on the list that I had previously read, and they had absolutely no idea what I was talking about.
ETA: I might go to another class in a year or two to see if they’ve improved. I’m not convinced that they’re worth donating money to at this moment.
(This is Dan from CFAR again)
We have a fair amount of data on the experiences of people who have been to CFAR workshops.
First, systematic quantitative data. We send out a feedback survey a few days after the workshop which includes the question “0 to 10, are you glad you came?” The average response to that question is 9.3. We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.
Less systematically but in more fleshed-out detail, there are several reviews that people who have attended a CFAR workshop have posted to their blogs (A, B+pt2, C+pt2) or to LW (1, 2, 3). Ben Kuhn’s (also linked above under “C”) seems particularly relevant here, because he went into the workshop assigning a 50% probability to the hypothesis that “The workshop is a standard derpy self-improvement technique: really good at making people feel like they’re getting better at things, but has no actual effect.”
In-person conversations that I’ve had with alumni (including some interviews that I’ve done with alumni about the impact that the workshop had on their life) have tended to paint a similar picture to these reviews, from a broader set of people, but it’s harder for me to share those data.
We don’t have as much data on the experiences of people who have been to test sessions or shorter events. I suspect that most people who come to shorter events have a positive experience, and that there’s a modest benefit on average, but that it’s less uniformly positive. Partly that’s because there’s a bunch of stuff that happens at a full workshop that doesn’t fit in a briefer event—more time for conversations between participants to digest the material, more time for one-on-one conversations with CFAR staff to sort through things, follow-ups after the workshop to work with someone on implementing things in your daily life, etc. The full workshop is also more practiced and polished (it has been through many more iterations), much more so than a test session; one-day events are in between (the ones advertised as alpha tests of a new thing are closer to the test-session end of the spectrum).
I’ve seen CFAR cite these survey numbers before, and I don’t view them as strong evidence that CFAR is valuable.
If people pay a lot of money for something that’s not worth it, we’d expect them to rate it as valuable anyway, thanks to cognitive dissonance.
If people rate something as valuable, is it because it improved their lives, or because it made them feel good?
For these ratings to be meaningful, I’d like to see something like a control workshop where CFAR asks people to pay $3900, teaches them a bunch of techniques that are known to be useless but still sound cool, and then asks them to rate their experience. Obviously this is both unethical and impractical, so I don’t suggest actually doing it. Perhaps “derpy self-improvement” workshops can serve as a control?
Hey Dan, thanks for responding. I wanted to ask a few questions:
You noted the non-response rate for the 20 randomly selected alumni. What about the non-response rate for the feedback survey?
“0 to 10, are you glad you came?” is a biased question, because it presupposes that the person is glad. An equivalent negatively framed question would be “0 to 10, are you dissatisfied that you came?” Would it be possible to anonymize and post the survey questions and data?
It’s great that you’re following up with people long after the workshops end. Why not survey all alumni? You have their emails.
I’ve read most of the blog posts about CFAR workshops that you linked to—they were one of my main motivations for attending a workshop. I notice that all of the reviews are from people who had already participated in LessWrong and related communities (all refer to prior CFAR, EA, and rationality-related topics from before they attended). It also seems that in-person conversations are heavily subject to availability bias: the people available for such conversations (those who attended workshops, know people who work at MIRI/CFAR, or are involved in LW meetups in Berkeley and the surrounding area) would naturally contribute to the positivity of those conversations. Evaporative cooling may also play a role, in that people who weren’t satisfied with the workshop would leave the group. Are there reviews from people who are not already familiar with LW or CFAR staff?
Also, I agree with MTGandP. It would be nice if CFAR could write a blog post or paper on how effective their teachings are compared to a control group. Perhaps two one-day events, with subjects randomized across the two days, would work well as a starting point.
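To make these two methodological worries concrete, here is a minimal Python sketch. Everything in it is hypothetical except the 18-of-20 response count and the 9.6 average quoted from Dan’s comment: the first function bounds how far non-response could move a reported mean, and the second shows the random-assignment step of the proposed two-day comparison.

```python
import random

def worst_case_bounds(responses, n_invited, scale_max=10):
    """Bound the true mean rating under non-response: assume the
    non-responders would all have answered 0 (lower bound) or all
    have answered scale_max (upper bound)."""
    n_missing = n_invited - len(responses)
    total = sum(responses)
    lower = total / n_invited
    upper = (total + n_missing * scale_max) / n_invited
    return lower, upper

# Individual responses aren't public, so this stand-in list simply
# reproduces the reported mean of 9.6 across 18 responders.
print(worst_case_bounds([9.6] * 18, n_invited=20))  # -> (8.64, 9.64)

def randomize_across_two_days(participants, seed=0):
    """Randomly split participants across two one-day events; one day
    could teach the real curriculum and the other a plausible-sounding
    placebo, per the starting-point design suggested above."""
    shuffled = list(participants)
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

day_a, day_b = randomize_across_two_days([f"participant_{i}" for i in range(20)])
```

Note that even in the worst case, where both non-responders would have answered 0, the average only drops to 8.64, so non-response alone can’t explain away the 9.6 figure; the framing and cognitive-dissonance worries seem harder to rule out.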
(Dan from CFAR here)
Hi cursed—glad to hear your feedback, though I’m obviously not glad that you didn’t have a good experience at the CFAR events you went to.
I want to share a bit of information from my point of view (as a researcher at CFAR) on 1) the role of the cognitive science literature in CFAR’s curriculum and 2) the typical experience of the people who come to a CFAR workshop. This comment is about the science; I’ll leave a separate comment about thing 2.
Some of the techniques that CFAR teaches are based pretty directly on things from the academic literature (e.g., implementation intentions come straight from Peter Gollwitzer’s research). Some of our techniques are not from the academic literature (e.g., the technique that we call “propagating urges” started out in 2011 as something that CFAR co-founder Andrew Critch did).
The not-from-the-literature techniques have been through a process of iteration, where we theorize about how we think the technique works, then (with the aid of our best current model) we try to teach people to use the technique, and then we get feedback on how it goes for them. Then repeat. The “theorizing” step of this process includes digging into the academic literature to get a better understanding of how the relevant parts of the mind work, and that often plays a role in shaping the class. With “propagating urges,” at first none of the people that Critch taught it to were able to get it to work for them, but then Critch made a connection to some neuroscience he’d been reading, we updated our model of how the technique was supposed to work, and then more people were able to make use of the technique. (I’m tempted to go into more specifics here, but that feels like a tangent and this comment is going to be long enough without it.)
Classes based on from-the-academic-literature techniques also go through a similar process of iteration. For example, there are a lot of studies that have shown that people who are instructed to come up with implementation intentions for a particular goal make more progress towards that goal. But I don’t know of any academic research on attempts to teach people the skill of being able to create implementation intentions, and the habit of actually using them in day-to-day life. And that’s what we’re trying to do at CFAR workshops, so that class has gone through a similar process of iteration as we get feedback on whether people are making use of implementation intentions and how it goes for them. (One simple change that helped get more people to use implementation intentions: giving the technique a different name. We now call it “trigger action planning”.)
So the cognitive science literature plays both of these roles for us: it’s a source of evidence about particular techniques that have been tested and found to work (or to not work), and it’s a source of models of how the mind works so that we can develop better techniques. We mention both of these types of scientific references in class (and in the further resources), and we try to be careful to distinguish them. Sharing our models in class (e.g., saying a few sentences in the propagating urges class about what we think the orbitofrontal cortex might be doing in this process) seems to be helpful for getting people to use the technique as we understand it (rather than getting confused about the steps, or rounding the technique off to the nearest cached thought). It also seems to help with getting people to take ownership of the technique and treat it as something that they can tinker with, rather than as a rote series of steps for them to follow (cf. learned blankness).
Finally, a brief comment on your point about the list of citations:
Each CFAR class has one staff member who takes the lead in developing the class, and I’m the research specialist who does a lot of digging into the literature and sharing/discussing research with whoever is developing the class. The aim is for the two of us to be conversant in the relevant academic literature. For the rest of the CFAR team, the priority is to be able to use the techniques and help other people use them, not to know all the studies. (Often there will be more than just us two puzzling things over together, but it typically isn’t the whole team.) The instructor who teaches a class at a CFAR event isn’t always the person who has been developing it, especially at one-day events, which are run by just two instructors instead of the full CFAR staff. If I’d been at the event you came to, the instructor who you asked about the articles probably would’ve referred you to me, and we could’ve had an interesting conversation.
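As a toy illustration only (the triggers and actions below are invented examples, not CFAR curriculum), the “if trigger, then action” shape of the implementation intentions / trigger-action plans described above can be sketched as a lookup from concrete cues to pre-committed responses:

```python
# Invented examples of trigger-action plans (TAPs): each pairs a
# concrete, noticeable cue with a response decided in advance, so
# no in-the-moment deliberation is needed once the cue occurs.
taps = {
    "I put my key in the front door": "put my gym bag out for tomorrow",
    "I open a new browser tab": "ask myself what I opened it for",
    "the 9am meeting ends": "write down the next action I committed to",
}

def on_trigger(cue):
    """Return the pre-committed action for a cue, if any."""
    return taps.get(cue)

print(on_trigger("I open a new browser tab"))
```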
Do you think it was unhelpful because you already had a high level of knowledge of the topics they were teaching and thus didn’t have much to learn, or because the actual techniques were not effective? Do you think your experience was typical? How useful do you think it would be to an average person? An average rationalist?
I don’t believe I had a high level of knowledge of the specific topics they were teaching (behavior change and the like). I did study some cognitive science in my undergraduate years, and I take issue with the ‘science’.
As for whether my experience was typical: I believe that the majority of people don’t get much, if anything, from CFAR’s rationality lessons. However, after a lesson people may be slightly more motivated to accomplish whatever they want to in the short term, simply because they’ve paid money for a course meant to increase their motivation.
Regarding the average person: there was one average person (i.e., someone who had never read LessWrong or other rationality material) at one of the workshops I attended. He fell asleep a few hours into the lesson; I don’t think he gained much from attending. I’m hesitant to extrapolate, though, because I’m not exactly sure what an average person entails.
As for an average rationalist: I haven’t met many rationalists, but I’d guess they wouldn’t benefit much, if at all.
Well, that’s a bit dispiriting, though I suppose that, looking back, my view of CFAR was a bit unrealistic. Downregulating the chance that CFAR is some kind of panacea.
Well, that’s a bit dispiriting, but thanks for responding anyway. Was this recently, or when they were just starting up?
(Apologies for the slight thread hijack here.)
It occurs to me that CFAR’s model of expensive workshops and generous grants to the impoverished (note: I am guessing about the generosity) is likely to produce rather odd demographics: there’s probably a really big gap between (1) the level of wealth/income at which you could afford to go, and (2) the level of wealth/income at which you would feel comfortable going, especially as—see e.g. cursed’s comments in this thread—it’s reasonable to have a lot of doubt about whether they’re worth the cost. (The offer of a refund mitigates that a bit.)
Super-handwavy quantification of the above: I would be really surprised if a typical person whose annual income is $30k or more were eligible for CFAR financial aid. I would be really surprised if a typical person whose income is $150k or less were willing to blow $4k on a CFAR workshop. (NB: “typical”. It’s easy to imagine exceptions.) Accordingly, I would guess that a typical CFAR workshop is attended mostly by people in three categories: impoverished grad students, etc., who are getting big discounts; people on six-figure salaries, many of them quite substantial six-figure salaries; and True Believers who are exceptionally convinced of the value of CFAR-style rationality, and willing to make a hefty sacrifice to attend.
I’m not suggesting that there’s anything wrong with that. In fact, it strikes me as a pretty good recipe for getting an interesting mix of people. But it does mean there’s something of a demographic “hole”.
I rather think there may be demand for a cheaper, less time-dependent way of attending. It may be several seasons before they end up back in my country, for example. Streaming or recording the whole thing and selling the video package seems like it could still get a lot of the benefits across. Their current strategy only really makes sense to me if they’re still in the testing and refining stage.
I think they are. If everything goes well, they will have published papers that prove their stuff works by the time they move out of the testing and refining stage.
Any idea how long that will be (months, years, decades)?
You can always shoot someone an email to ask about financial aid, and plan a trip stateside around a workshop if, with financial aid, it looks doable, and if, after talking to someone, it looks like the workshop would predictably have enough value that you should do it now rather than when you have more time and money.
CFAR has financial aid.
Also, attending LW meetups and asking about organizing meetups based on instrumental rationality material is cheap and fun.
Somehow I doubt the financial aid will stretch to the full amount, and my student debt is already somewhat fearsome.
I’m on the LW meetup list already, as it happens. I’m currently trying to get my local one to include more instrumental rationality, but I lack a decent guide to which methods work, which techniques to try, or which games are fun and useful. For that matter, I don’t know what games there are at all, beyond a post or two I stumbled upon.
You could ask Metus how much they covered for them, or someone at CFAR how much they’d be willing to cover. The costs for asking are small, and you won’t get anything you don’t ask for.
Fair point, done. On a related note, I wonder how I can practice convincing my brain that failure does not mean death the way it did in the ancestral environment.
Exposure therapy: fail at small things, then larger ones, where it is obvious that failure doesn’t mean death. First remember past experiences where you failed and did not die, then go into new situations.
CFAR suggests doing exercises to extend your comfort zone for that purpose.
Even in the ancestral environment, not all failures meant death; I suspect only a fairly small proportion did.