Mostly wanted to say that even though CFAR got maybe “less far” than hoped for, in my view it actually got quite far.
I agree CFAR accomplished some real, good things. I’d be curious to compare our lists (and the list of whoever else wants to weigh in) as to where CFAR got.
On my best guess, CFAR’s positive accomplishments include:
Learning to run workshops where people often “wake up” and are more conscious/alive/able-to-reflect-and-choose, for at least ~4 days, and often, to a lesser extent, for a several-month aftermath;
Helping a bunch of people find each other, who were glad to find each other and who otherwise wouldn’t have met;
Helping the EA scene preserve “care about true beliefs” and “have the kinds of conversations that might help you and others to figure out what’s true” more than it otherwise might’ve, despite massive immigration into that scene making it relatively difficult to preserve values (I’m not sure which way our effect was here, actually, but my guess is that it was helpful);
Helping the EA scene contain “you’re allowed to be a person with insides and opinions and feelings and decisions and interests and hobbies; you don’t have to be only an EA-bot”;
Nudging more people toward reading the Sequences; nudging more people toward “beliefs are for true things” and “problems are for solving”
Several “rationality units” that seem helpful for thinking, particularly:
The “Inner Simulator” unit (helps people track “beliefs” as “what do you actually expect to see happen” rather than as “what sentences do I say in my head”; helps people use these visceral beliefs concretely in planning);
“Goal factoring” (helps people notice that at least sometimes, actions are for the sake of goals; and if you suss out what those goals are, and look separately for ways to optimize for each, you can often do a lot better)
“Double Crux” (and “anti-crux”) (helps people have conversations in which they try cooperatively to figure out the causes of one another’s beliefs, and to see if they have evidence that interests one another, instead of taking turns “scoring points” or otherwise talking past one another)
“Focused Grit” (where you spend five minutes, by the clock, actually trying to solve a problem before declaring that problem “impossible”)
Gendlin’s “Focusing”, and “Focusing for research” (helps people notice when the concepts they are using, or the questions they are tackling, are a bit “off”, and helps people tackle the real question instead of pushing words around pretending to tackle the somewhat-fake question. Useful for tackling ‘personal bugs’ such as a feeling of not-quite-rightness in how you’re going about your job or relationship; useful also for doing research. Not invented by CFAR, but shared with plenty of participants by us.)
[Various other things you can find in the CFAR handbook, but these are my top picks.]
“Learning to run workshops where people often “wake up” and are more conscious/alive/able-to-reflect-and-choose, for at least ~4 days, and often, to a lesser extent, for a several-month aftermath”
I permanently upgraded my sense of agency as a result of CFAR workshops. Wouldn’t be surprised if this happened to others too. Would be surprised if it happened to most CFAR participants.
//
I think CFAR’s effects are pretty difficult to see and measure. I think this is the case for most interventions?
I feel like the best things CFAR did were more like… fertilizing the soil and creating an environment where lots of plants could start growing. What plants? CFAR didn’t need to pre-determine that part. CFAR just needed to create a program, have some infrastructure, put out a particular call into the world, and wait for what shows up as a result of that particular call. And then we showed up. And things happened. And CFAR responded. And more things happened. Etc.
CFAR can take partial credit for my life from 2015 onwards, into the future. I’m not sure which parts of it. Shrug.
Maybe I think most people try to slice the cause-effect pie in weird, false ways, and I’m objecting to that here.
[wrote these points before reading your list]
I. CFAR managed to create a workshop which is, in my view, reasonably balanced, and consequently beneficial for most people.
In my view, one of the main problems with “teaching rationality” is that people’s minds often have parts which are “broken” in a compatible way, making the whole work. My go-to example is “planning fallacy” and “hyperbolic discounting”: because in decision-making, typically only a product term of both appears, they can largely cancel out, and the practical decisions of someone exhibiting both biases could be closer to the optimum than people expect (see the sketch after this list). Teach someone just how to be properly calibrated in planning … and you can make them worse off.
Some of the dimensions to balance I mean here could be labelled e.g. “S2 getting better S1 data access”, “S2 getting better S1 write access”, “S1 getting a better communication channel to S2”, “striving for internal cooperation and kindness”, “get good at reflectivity”, “don’t get lost infinitely reflecting”. (All these labels are fake but useful.)
(In contrast, a component which was in my view off-balance is “group rationality”)
This is non-trivial, and I’m actually worried about, e.g., various EA community-building or outreach events reusing parts of the CFAR curriculum but selecting only the parts which, e.g., help S2 “rewrite” S1.
II. Impressively good pedagogy of some classes
III. Exploration going on, to a decent degree. At least in Europe, every run was a bit different, both with new classes and with significant variance between versions of the same class. (Actually, I don’t know if this was true for the US workshops at the same time / the whole time.)
IV. Heroic effort to keep good epistemics, which often succeeded
V. In my view some amount of “self-help” is actually helpful.
VI. Container-creation: bringing interesting groups of people together in the same building
VII. Overall, I think the amount of pedagogical knowledge created is impressive, given the size of the org.
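To make the bias-cancellation point in I. concrete, here is a minimal toy sketch (my own illustration, not anything from the CFAR curriculum), assuming a hyperbolic discount curve V = R / (1 + k·t) and a planning fallacy modeled as a multiplicative duration underestimate t̂ = α·t. The perceived value is then R / (1 + k·α·t): only the product k·α appears, so an overly impatient k and an overly optimistic α can cancel exactly, and fixing only the calibration (α → 1) leaves the raw impatience exposed.

```python
# Toy model (hypothetical numbers) of "planning fallacy" and "hyperbolic
# discounting" cancelling out, because only their product enters the decision.

def perceived_value(reward, true_duration, k, alpha):
    """Hyperbolically discounted value as the agent perceives it.

    k is the discount rate (impatience); alpha is the fraction of the true
    duration the agent expects the project to take (planning fallacy).
    Note that k and alpha enter only as the product k * alpha.
    """
    return reward / (1 + k * alpha * true_duration)

k_endorsed = 0.1   # discount rate the agent would endorse on reflection
k_actual = 0.4     # actual hyperbolic discounting: 4x too impatient
alpha = 0.25       # planning fallacy: expects 1/4 of the true duration

reward, duration = 100, 12  # a year-long project paying off at the end

endorsed = perceived_value(reward, duration, k_endorsed, 1.0)   # ~45.5
biased = perceived_value(reward, duration, k_actual, alpha)     # ~45.5: 0.4 * 0.25 == 0.1, so the biases cancel
calibrated = perceived_value(reward, duration, k_actual, 1.0)   # ~17.2: planning fixed, impatience exposed

print(endorsed, biased, calibrated)
```

With both biases the project looks exactly as attractive as it “should”; teach calibration alone and the agent now under-values (and may abandon) long projects they would reflectively endorse, which is the sense in which the partial fix makes them worse off.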
What’s anti-crux?
“Anti-crux” is where the two parties who’re disagreeing about X take the time to map out the “common ground” that they both already believe, and expect to keep believing, regardless of whether X is true or not. It’s a list of the things that “X or not X?” is not a crux of. Often best done before double-cruxing, or in the middle, as a break, when the double-cruxing gets triggering/disorienting for one or both parties, or for a listener, or for the relationship between the parties.
A common partial example that may get at something of the spirit of this (and an example that people do in the normal world, without calling it “anti-crux”) is when person A has a criticism of e.g. person B’s blog post or something (and is coming to argue about that), but A starts by creating common knowledge that e.g. they respect person B, so that the disagreement won’t seem to be about more than it is.