If CFAR will be discontinuing/de-emphasizing rationality workshops for the general educated public, then I’d like to see someone else take up that mantle, and I’d hope that CFAR would make it easy for such a startup to build on what they’ve learned so far.
We’ll be continuing the workshops, at least for now, with less direct focus, but probably with a similar amount of net development time going into them even if the emphasis is on more targeted programs. This is partly because we value the existence of an independent rationality community (varied folks doing varied things adds to the art and increases its integrity), and partly because we’re still dependent on the workshop revenue for part of our operating budget.
Re: others taking up the mantle: we are working to bootstrap an instructor training; have long been encouraging our mentors and alumni to run their own thingies; and are glad to help others do so. Also Kaj Sotala seems to be developing some interesting training thingies designed to be shared.
Feedback from someone who really enjoyed your May workshop (and I gave this same feedback then, too): Part of the reason I was willing to go to CFAR was that it is separate (or at least pretends to be separate, even though they share personnel and office space) from MIRI. I am 100% behind rationality as a project but super skeptical of a lot of the AI stuff that MIRI does (although I still follow it because I do find it interesting, and a lot of smart people clearly believe strongly in it, so I’m prepared to be convinced). I doubt I’m the only one in this boat.
Also, I’m super uncomfortable being associated with AI safety stuff on a social level because it has a huge image problem. I’m barely comfortable being associated with “rationality” at all because of how closely associated it is (in my social group, at least) with AI safety’s image problem. (I don’t exaggerate when I say that my most-feared reaction to telling people I’m associated with “rationalists” is “oh, the basilisk people?”)