I wrote the following comment during this AMA back in 2019, but didn’t post it because of the reasons that I note in the body of the comment.
I still feel somewhat unsatisfied with what I wrote. I think something about the tone feels wrong, or gives the wrong impression, somehow. Or maybe this only presents part of the story. But it still seems better to say it aloud than not.
I feel more comfortable posting it now, since I’m currently early in the process of attempting to build an organization / team that does meet these standards. In retrospect, I think probably it would have been better if I had just posted this at the time, and hashed out some disagreements with others in the org in this thread.
(In some sense this comment is useful mainly as a bit of a window into the kind of standards that I, personally, hold a rationality-development / training organization to.)
My original comment is reproduced below, mostly verbatim (plus a few edits for clarity).
I feel trepidation about posting this comment, because it seems in bad taste to criticize a group, unless one is going to step up and do the legwork to fix the problem. This is one of the top 5 things that bothers me about CFAR, and maybe I will step up to fix it at some point, but I’m not doing that right now and there are a bunch of hard problems that people are doing diligent work to fix. Criticizing is cheap. Making things better is hard.
[edit 2023: I did run a year-long CFAR instructor training that was explicitly designed to take steps on this class of problems though. It is not as if I was just watching from the sidelines. But shifting the culture of even a small org, especially from a non-executive role, is pretty difficult, and my feeling is that I made real progress in the direction that I wanted, but only about one twentieth of the way to what I would think is appropriate.]
My view is that CFAR does not meaningfully eat its own dogfood, or at least doesn’t do so enough, and that this hurts the organization’s ability to achieve its goals.
This is not to contradict the anecdotes that others have left here, which I think are both accurate presentations, and examples of good (even inspiring) actions. But while some members of CFAR do have personal practices (with varying levels of “seriousness”) in correct thought and effective action, CFAR, as an institution, doesn’t really make much use of rationality. I resonate strongly with Duncan’s comment about counting up vs. counting down.
More specific data, both positive and negative:
CFAR did spend some 20 hours of staff meeting time Circling in 2017, separately from a ~50-hour CFAR Circling retreat that most of the staff participated in, and various other Circling events that CFAR staff attended together (but were not “run by CFAR”).
I do often observe people doing Focusing moves and Circling moves in meetings.
I have observed full, explicit Double Crux conversations only occasionally, on the order of three or four times a year.
I frequently (on the order of once every week or two) observe CFAR staff applying the Double Crux moves (offering cruxes, crux checking, operationalizing, playing the Thursday-Friday game) in meetings and in conversation with each other.
Group goal-factoring has never happened, to the best of my knowledge, even though there are a number of things that happen at CFAR that seem very inefficient, seem like “shoulds”, or are frustrating / annoying to at least one person [edit 2023: these are explicit triggers for goal factoring]. I can think of only one instance in which two of us (Tim and I, specifically) tried to goal-factor something (a part of meetings that some of us hate).
We’ve never had an explicit group pre-mortem, to the best of my knowledge. There is the occasional two-person session of simulating a project (usually a workshop or workshop activity), and the ways in which it goes wrong. [edit 2023: Anna said that she had participated in many long-form postmortems regarding hiring in particular, when I sent her a draft of this comment in 2019.]
There is no infrastructure for tracking predictions or experiments. Approximately, CFAR as an institution doesn’t really run [formal] experiments, at least experiments with results that are tracked by anything other than the implicit intuitions of the staff. [edit 2023: some key features of a “formal experiment” as I mean it are writing down predictions in advance, and having a specific end date at which the group reviews the results. This is in contrast to simply trying new ideas sometimes.]
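To make “formal experiment” concrete: the minimal version of this kind of infrastructure is just a shared log in which predictions are written down in advance, each with a fixed review date at which the group scores them. Here is a hedged sketch of what that might look like, in Python; the names and the example entry are hypothetical illustrations, not anything CFAR actually had:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    """A prediction written down in advance, scored at a fixed review date."""
    claim: str                      # the concrete thing we expect to observe
    probability: float              # credence, recorded before the experiment runs
    review_date: date               # the specific end date for reviewing results
    outcome: Optional[bool] = None  # left empty until the group review

@dataclass
class Experiment:
    """A formal experiment: advance predictions plus a scheduled group review."""
    name: str
    predictions: list = field(default_factory=list)

    def due_for_review(self, today: date) -> list:
        """Predictions whose review date has arrived but which are still unscored."""
        return [p for p in self.predictions
                if p.review_date <= today and p.outcome is None]

# Hypothetical usage: write the prediction down when the experiment starts...
exp = Experiment("trial run of a new meeting format")
exp.predictions.append(Prediction(
    claim="meetings end on time in at least 3 of the next 4 weeks",
    probability=0.7,
    review_date=date(2019, 6, 1),
))

# ...and on the review date, pull up everything that is due and score it.
for p in exp.due_for_review(date(2019, 6, 1)):
    p.outcome = True  # recorded during the group review
```

The point is not the tooling (a spreadsheet would do); it’s that the probability and the end date are committed to up front, so results get reviewed against something other than the staff’s implicit intuitions.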
There are no explicit processes for iterating on new policies or procedures (such as iterating on how meetings are run).
[edit 2023: An example of an explicit process for iterating on policies and procedures is maintaining a running document for a particular kind of meeting. Every time you have that kind of meeting, you start by referring to the notes from the last session. You try some specific procedural experiments, and then end the meeting with five minutes of reflection on what worked well or poorly, and log those in the document. This way you are explicitly trying new procedures and capturing the results, instead of finding procedural improvements mainly by stumbling into them, and often forgetting improvements rather than integrating and building upon them. I use documents like this for my personal procedural iteration.
Or in Working Backwards, the authors describe not just organizational innovations that Amazon came up with to solve explicitly-noted organizational problems, but the sequence of iteration that led to those final-form innovations.]
There is informal, but effective, iteration on the workshops. The processes that run CFAR’s internals, however, seem to me to be mostly stagnant [edit 2023: in the sense that there’s no deliberate, intentional effort on solving long-standing institutional frictions, or on developing more effective procedures for doing things.]
As far as I know, there are no standardized checklists for employing CFAR techniques in relevant situations (like starting a new project). I wouldn’t be surprised if there were some ops checklists with a murphyjitsu step. I’ve never seen a checklist for a procedure at CFAR, excepting some recurring shopping lists for workshops.
The interview process does not incorporate the standard research about interviews and assessment contained in Thinking, Fast and Slow. (I might be wrong about this. I, blessedly, don’t have to do admissions interviews.)
No strategic decision or choice to undertake a project that I’m aware of has involved quantitative estimates of impact, or quantitative estimates of any kind. (I wouldn’t be surprised if the decision to run the first MSFP did, [edit 2023: but I wasn’t at CFAR at the time. My guess is that there wasn’t.])
Historically, strategic decisions were made to a large degree by inertia. This is more resolved now, but for a period of several years, I think most of the staff didn’t really understand why we were running mainlines, and in fact when people [edit 2023: workshop participants] asked about this, we would say things like “well, we’re not sure what else to do instead.” This didn’t seem unusual, and didn’t immediately call out for goal factoring.
There’s no designated staff training time for learning or practicing the mental skills, or for doing general tacit-knowledge transfer between staff. However, full-time CFAR staff have historically had a training budget, which they could spend on whatever personal development stuff they wanted, at their own discretion.
CFAR does have a rule that you’re allowed / mandated to take rest days after a workshop, since the workshop eats into your weekend.
Overall, CFAR strikes me as mostly a normal company, populated by some pretty weird hippy-rationalists. There aren’t any particular standards by which employees are expected to use rationality techniques, nor institutional procedures for doing rationality [edit 2023: as distinct from having a shared rationality-culture].
This is in contrast to, say, Bridgewater Associates, which is clearly and intentionally structured to enable updating and information processing at the organizational level. (Incidentally, Bridgewater is rich in the most literal sense.)
Also, I’m not fully exempt from these critiques myself: I have not really internalized goal factoring yet, for instance, and I think that I, personally, am making the same kind of errors of inefficient action that I’m accusing CFAR of making. I also don’t make much use of quantitative estimates, and while I have lots of empirical iteration procedures, I haven’t really gotten the hang of doing explicit experiments. (I do track decisions and predictions, though, for later review.)
Overall, I think this gap is due about 10% to “these tools don’t work as well, especially at the group level, as we seem to credit them, and we are correct to not use them”, about 30% to this being harder to do than it seems, and about 60% to CFAR not really trying at this (and maybe it shouldn’t be trying at this, because there are tradeoffs and other things to focus on).
Elaborating on the 30%: I do think that making an org like this, especially when not starting from scratch, is deceptively difficult. While implementing some of these practices seems trivial on the surface, it actually entails a shift in culture and expectations, and doing that effectively requires leadership and institution-building skills that CFAR doesn’t currently have. Like, if I imagine something like this existing, it would need to have a pretty in-depth onboarding process for new employees, teaching the skills, and presenting “how we do things here.” If you wanted to bootstrap into this kind of culture, at anything like a fast enough speed, you would need the same kind of onboarding for all of the existing employees, but it would be even harder, because you wouldn’t have the culture already going to provide example and immersion.