Why isn’t CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like it was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one’s confidence in propositions. I’ve seen the idea of an app for Double Crux mentioned multiple times; that would be quite useful for improving online discourse (it seems like Arbital sorta had relevant plans there).
Relatedly: why doesn’t CFAR have a prep course? I asked them multiple times what I could do to prepare, and they said “you don’t have to do anything”. This doesn’t make sense to me. I would be quite willing to spend hours learning marginal CFAR concepts, even if it were at a lower pacing/information density/quality. I think the argument is something like ‘you must empty your cup so you can learn the material’, but I’m not sure.
I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern. Regardless of whether that’s a motivator, I think their goals would be better served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps using that as a pipeline for the more in-depth workshop).
Some of what a CFAR workshop does is convince our System 1s that it’s socially safe to be honest about having some unflattering motives.
Most attempts at doing that in written form would, at best, only convince our System 2. The benefits of CFAR workshops depend heavily on changing System 1.
Your question about prepping for CFAR sounds focused on preparing System 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing System 1: minimize outside distractions, and have a list of problems in your life that you might want to solve at the workshop. That’s different from “you don’t have to do anything”.
Most of the difficulties I’ve had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some “learned helplessness” about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I’ve only experienced in reaction to observing people around me being in appropriate moods.
Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.
I think it comes down to a combination of 1) not being very confident that CFAR has the True Material yet, and 2) not being very confident in CFAR’s ability to correct misconceptions in any format other than teaching in-person workshops. That is, you might imagine that right now CFAR has some Okay Material, but that teaching it in any format other than in person risks misunderstandings where people come away with some Bad Material, and that neither of these is what we really want, which is for people to come away with the True Material, whatever that is. There’s at least historically been a sense that one of the only ways to get people to approximately come away with the True Material is for them to actually talk in person to the instructors, who maybe have some of it in their heads but not quite in the CFAR curriculum yet.
(This is based on a combination of talking to CFAR instructors and volunteering at workshops.)
It’s also worth pointing out that CFAR is incredibly talent-constrained as an organization, and that there are lots of things CFAR could do that would plausibly be good ideas, and that CFAR might even endorse as such, but that they just don’t have the person-hours to prioritize.
At the mainstream workshops, nothing in the neighborhood of AI safety appears anywhere in the curriculum. If it comes up at all, it’s in informal conversations.
Could a possible solution be to teach new teachers?
How far is a person who “knows X” from a person who “can teach X”? I imagine that being able to teach X has essentially two requirements: first, understanding X deeply (which is what we want to achieve anyway); second, general teaching skills, independent of X (these could be taught as a separate package, and could already be valuable on their own for people who teach). What you then need is written material covering everything that should be considered when teaching X, and a short lesson explaining the details.
The plan could be approximately this:
1) We already have lessons for X, Y, and Z (what CFAR currently offers to participants).
2) Make lessons for teaching in general—and offer them to participants, too, because that is a separately valuable product.
3) Make lessons on “how to teach X” etc., each of them requiring lessons for “X” and for “general teaching” as prerequisites. These will be for volunteers wanting to help CFAR. After the lessons, have the volunteers teach X to some random audience (for a huge discount or even for free). If the volunteer does it well, let them teach X at CFAR workshops; first with some supervision and feedback, later alone.
Yep, CFAR is training new instructors (I’m one of them).
In the education literature, these two requirements are called content knowledge and pedagogical knowledge, respectively. There is an important third category called pedagogical content knowledge, which refers to specific knowledge about how to teach X.
The example Val likes to use is that if you want to teach elementary school students about division, it’s really important to know that there are two conceptually distinct kinds of division, namely equal sharing (you have 12 apples, you want to share them with 4 friends, how many apples per friend) and repeated subtraction (you have 12 apples, you have gift bags that fit 4 apples, how many bags can you make). This is not quite a fact about division, nor is it general teaching skill; it is specifically part of what you need to know to teach division.
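To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not anything from the CFAR curriculum): both procedures arrive at 12 divided by 4 equals 3, but by conceptually different routes.

```python
# Illustrative sketch only: two conceptually different ways to "divide 12 by 4".

def equal_sharing(items: int, recipients: int) -> int:
    """Deal items out one at a time to each recipient; the answer is items per recipient."""
    piles = [0] * recipients
    for i in range(items):
        piles[i % recipients] += 1
    assert len(set(piles)) == 1  # everyone ended up with the same amount
    return piles[0]

def repeated_subtraction(items: int, bag_size: int) -> int:
    """Keep filling bags of a fixed size until you run out; the answer is the number of bags."""
    bags = 0
    while items >= bag_size:
        items -= bag_size
        bags += 1
    return bags

print(equal_sharing(12, 4))         # 3 apples per friend
print(repeated_subtraction(12, 4))  # 3 gift bags
```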
If you look at their staff page, you’ll find they’ve recently trained something like 10 new instructors.
Sounds like great news. Or perhaps, more precisely, an experiment with potentially high value.
There’s a difference between “knowing X” and having X be a default behavior. There’s also a difference between knowing X and being able to teach it to people who think differently than oneself or have different preconceptions.
Promising pilot projects often don’t scale.
Relevant info: I’ve volunteered at one CFAR workshop and hang out in the CFAR office periodically. The views here are my own, based on my models of how I think CFAR is thinking.
For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I’m hoping to start some discussion on what an actual interface might look like.
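For anyone thinking about what such an interface would need to track, here is a rough, hypothetical sketch of a minimal data model for a double-crux exchange. This is purely my own assumption, not the linked mockup’s actual design, and all names and fields are invented for illustration.

```python
# Hypothetical sketch of a minimal data model for a Double Crux exchange.
# All names and fields are my own assumptions, not taken from the linked mockup.
from dataclasses import dataclass, field

@dataclass
class Crux:
    statement: str   # "If I changed my mind about this, I'd change my mind about the disagreement."
    holder: str      # which participant offered this crux
    credence: float  # holder's probability (0.0 to 1.0) that the statement is true

@dataclass
class DoubleCruxSession:
    topic: str                  # the top-level disagreement
    positions: dict             # participant name -> stated position
    cruxes: list = field(default_factory=list)

    def shared_cruxes(self):
        """Statements offered as a crux by more than one participant -- candidate double cruxes."""
        holders_by_statement = {}
        for c in self.cruxes:
            holders_by_statement.setdefault(c.statement, set()).add(c.holder)
        return [s for s, holders in holders_by_statement.items() if len(holders) > 1]
```

An interface could then surface whatever shared_cruxes() returns as the statements both parties agree would change their minds, which is where the productive part of the conversation tends to happen.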
Related to the idea of a prep course, I’ll be making a LW post in the next few days about my attempt to create a new sequence on instrumental rationality that is complementary to the sort of self-improvement CFAR does. That may be of interest to you.
Otherwise, I can say that at least at the workshop I was at, there was zero mention of AI safety from the staff. (You can read my review here). It’s my impression that there’s a lot of cool stuff CFAR could be doing in tandem w/ their workshops, but they’re time constrained. Hopefully this becomes less so w/ their new hires.
I do agree that having additional scaffolding would be very good, and that’s part of my motivation to start on a new sequence.
Happy to talk more on this as I also think this is an important thing to focus on.
Yep, you were one of the parties I was thinking of. Nice work! :D
I’m also interested in developing my instrumental rationality, and I think many of us are. Some may not have noticed CFAR’s resource pages: Reading List, Rationality Videos, Blog Updates, and Rationality Checklist.
They do update these from time to time.
I can’t really speak for CFAR’s plans or motives though. Last I heard they were still in an experimental phase and weren’t confident enough in their material to go public with it in a big way yet. Has this changed?
The first reason why CFAR isn’t doing X is that CFAR thinks other things besides X are more important targets for its effort.
At the beginning, CFAR considered probability calibration very important. As far as I understand, today they consider it less important and a variety of other mental skills more important. As a result, I think they decided against spending more resources on the Credence game.
As far as a Double Crux app goes, it’s a project that somebody could do, but I’m not sure that CFAR is the best actor to do it. If Arbital does it and tries to build a community around it, that might yield a higher return.
As far as I understand, CFAR chooses to spend effort on optimizing the post-workshop experience with weekly exercises. I can understand that they might believe that’s more likely to provide good returns than focusing on the pre-workshop experience.
CFAR staff did publish http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/ and http://lesswrong.com/lw/o2k/flinching_away_from_truth_is_often_about/ but I guess writing concepts down in that way takes a lot of effort.