CFAR-run MIRI Summer Fellows program: July 7-26
CFAR will be running a three-week summer program this July for MIRI, designed to increase participants’ ability to do technical research into the superintelligence alignment problem.
The intent of the program is to boost participants as far as possible in four skills:
1. The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;
2. “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems”—i.e., the skillset taught in the core LW Sequences. (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
3. The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
4. The basics of AI safety-relevant technical research. (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)
The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.
If you’re interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/
Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent humanity brings to bear on the superintelligence alignment problem may determine how well we navigate it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.
Are there mathematical prerequisites, e.g. knowing computability/complexity theory?
We’ll be looking for both math aptitude and math knowledge, but there are no formal prerequisites. The program will be structured to enable folks with very different initial levels of background skill, CFAR experience, Sequences experience, etc. to teach each other, to separate into different sections when appropriate, and to all be part of a single effort while each having their skill level pushed. We expect a diverse group, with different folks initially skilled in, or new to, different components of the work. It should be a lot of fun.
Given the target audience (MIRI Fellows), you’d be in the minority not knowing it, at the very least.
Advertising this so late is probably suboptimal: I’d expect most people to have made their summer plans already, and arranging three weeks off from work/studies/whatever is something most folks need a lot of advance notice for.
Indeed. The program is a last-minute idea, and we considered waiting until next year for this reason; but it seemed better to get started. And, contrary to my initial fears, interest and applications seem good, so far.
The Overton window on AI risk has shifted; this program would not have been plannable a year ago. I feel bad for the folks who are finding out about this late, and who would’ve wanted to come and now have to decide between breaking existing plans and waiting for a future year (if we run these in future years); but it still seems good that we’re doing it now.
This isn’t a first for CFAR or MIRI—I hope you guys are putting lots of thought into how to have your last-minute ideas earlier :-)
I think it’s partly that we don’t plan far enough in advance, but also partly a greater-than-usual willingness to Try Things that seem like good ideas even if the timeline is a bit rushed. That’s how the original minicamp happened, and it ended up going so well that it inspired us to develop and launch CFAR.
I know, but something seems not-quite-right about this. If you had all the same events at the same times, but thought of them earlier and so had longer to plan them, you’d be strictly better off. I can think of two constraints that can make rushed timelines like this make sense:
1. You’re ideas-bound, not resources-bound: there’s little you can do to have ideas any earlier than you already do.
2. The ideas only make sense to implement in the light of information you didn’t have earlier, so you couldn’t have started acting on them before.
If you’re happy that you’re already pushing these constraints as far as it makes sense to, then I’ll stop moaning :)
Why does this program rely on AI risk being within the Overton window? I would guess that the majority of people interested in this were already interested in AI risk before it went mainstream.
First, because the high-math community seems to contain many who are interested now (and have applied) whom it would’ve been harder to interest before. Second, because running such a program for MIRI is more compatible with CFAR’s branding, and with CFAR’s ability to appeal to a wide audience, now than it was before.
Does the number of applications seem good, or the quality? I can’t help but suspect that better candidates are more likely to already have alternate plans.
Both, actually.
Well, I signed up for an interview (probably won’t amount to anything, but it’s too good of an opportunity to just ignore). After signing up though it occurred to me that this might be a US-only deal. Would my being Canadian be a deal-breaker?
Folks from all countries are welcome.
The program is no longer conditional: we’re on; the group looks awesome; applications are still welcome.
It may help to mention in what way the event is conditional. Summer is a rather valuable time for many who may attend, and some types of backup plans (e.g., internships) are hard to make.
The event is conditional on finding 14+ good participants. Applications are looking good, and I’m optimistic, but it’s not certain yet. We will try to finalize things as soon as we can.