It seems like the benefit of CFAR’s camp, at least for you, has less to do with the techniques they teach than with the general value of being around intelligent, intentional, like-minded people. That is not a bad thing, but is probably not exactly the sort of benefit they’re aiming for.
IMO, this misses the causes of Qiaochu’s subsequent shifts: the thing he describes getting is the thing we’re aiming for, and it somehow seems to happen much more when folks attend a CFAR workshop than when folks spend a similar amount of time with similarly intelligent people in other contexts.
The thing we’re trying to teach at CFAR isn’t the techniques, but is taught via teaching the techniques. This is perhaps best explained by analogy, as follows:
In computer science, when a person learns their first programming language, they learn it via learning to use a set of particular functions in a particular language (e.g., learning the syntax of for loops in language X) and doing related exercises. But the change that happens in the programming student is somehow a harder-to-name shift toward being able to “think like a computer scientist”; we know this because, when the student later learns further programming languages, each takes fewer weeks to learn, and they are more able to generate solutions to new problems in the new languages.
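To make the analogy concrete, here is a minimal Python sketch (the example and names are my own illustration, not anything CFAR teaches): the same concept, “visit every element and accumulate,” written in two surface syntaxes. A student who has absorbed the concept transfers between forms in minutes; one who has only memorized the first form has to start over.

```python
# Illustrative sketch: one concept, two surface forms.
numbers = [3, 1, 4, 1, 5]

# Form 1: index-based loop, the style many first courses teach.
total = 0
for i in range(len(numbers)):
    total += numbers[i]

# Form 2: direct iteration, the style another language or idiom favors.
total_again = 0
for n in numbers:
    total_again += n

assert total == total_again  # same concept, different surface syntax
```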
What we now say at the workshops’ opening session is that the techniques folks are about to learn aren’t the skills that e.g. the CFAR instructors actually use, but that they form components of a “soup” that we do actually use—they are training exercises that help to teach something harder to phrase, that involves components of the techniques used in a more fluid way, and that also involves the general system 1 expectation that problems are soluble, that difficult or magical-looking skills are secretly made up of simple components, that you yourself are made of components that are simpler and sillier than you might think (and that it’s agenty to acknowledge that and plan training exercises for yourself, instead of expecting to ‘just use your free will’), etc.
it somehow seems to happen much more when folks attend a CFAR workshop than when folks spend a similar amount of time with similarly intelligent people in other contexts
The similarly intelligent people are not necessarily rational. You could find hundreds of highly intelligent people at any university; a dozen of them would be extremely intelligent. But most of them seem to have no desire to self-improve in general (not just in the subject they specialize in), although they may profess that self-improvement is a good and noble goal. Actually, the mere fact that they are already successful in what they do may dampen their desire to improve.
Meeting people who are intelligent, epistemically rational, and instrumentally rational is still probably better in a context that makes it obvious that one is supposed to learn from them. If nothing else, the students are not ashamed to ask.
IMO, this misses the causes of Qiaochu’s subsequent shifts: the thing he describes getting is the thing we’re aiming for
I stand corrected. Thanks. The programming analogy helps; I’m in IT and I’m familiar with the phenomenon you describe.
Qiaochu Yuan noted in the post that he’s a local, and had regular post-workshop meatspace contact with CFAR personnel. It would be interesting to compare his experience to those who travel in from out of town.
and that also involves the general system 1 expectation that problems are soluble, that difficult or magical-looking skills are secretly made up of simple components
This is a wonderful description of something I usually take for granted, and I am sometimes incredibly confused by people who don’t. It feels like a natural counterpart to the thought pjeby expressed in this post.
What we now say at the workshops’ opening session is that the techniques folks are about to learn aren’t the skills that e.g. the CFAR instructors actually use, but that they form components of a “soup” that we do actually use
Yes, this is what I was attempting to say. Thanks for phrasing it so concisely!
the general system 1 expectation that problems are soluble, that difficult or magical-looking skills are secretly made up of simple components, that you yourself are made of components that are simpler and sillier than you might think (and that it’s agenty to acknowledge that and plan training exercises for yourself, instead of expecting to ‘just use your freewill’), etc.
Also this!