MIRI Summer Fellows Program
- 9 August 2019, 2:00 pm to 24 August 2019, 11:00 am
- Bodega Bay, CA, USA
- Contact: colm@intelligence.org
CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019.
MSFP is an extended retreat for mathematicians and programmers with a serious interest in making technical progress on the problem of AI alignment. It includes an overview of CFAR’s applied rationality content, a breadth-first grounding in the MIRI perspective on AI safety, and multiple days of actual hands-on research with participants and MIRI staff attempting to make inroads on open questions.
Program Description
The intent of the program is to boost participants, as far as possible, in four overlapping areas:
Doing rationality inside a human brain: understanding, with as much fidelity as possible, what phenomena and processes drive and influence human thinking and reasoning, so that we can account for our own biases and blind spots, better recruit and use the various functions of our brains, and, in general, be less likely to trick ourselves, gloss over our confusions, or fail to act in alignment with our endorsed values.
Epistemic rationality, especially the subset of skills around deconfusion. Building the skill of noticing where the dots don’t actually connect; answering the question “why do we think we know what we think we know?”, particularly when it comes to predictions and assertions around the future development of artificial intelligence.
Grounding in the current research landscape surrounding AI: being aware of the primary disagreements among leaders in the field, and the arguments for various perspectives and claims. Understanding the current open questions, and why different ones seem more pressing or real under different assumptions. Being able to follow the reasoning behind various alignment schemes/theories/proposed interventions, and being able to evaluate those interventions with careful reasoning and mature (or at least more-mature-than-before) intuitions.
Generative research skill: the ability to make real and relevant progress on questions related to the field of AI alignment without losing track of one’s own metacognition. The parallel processes of using one’s mental tools, critiquing and improving one’s mental tools, and making one’s own progress or deconfusion available to others through talks, papers, and models. Anything and everything involved in being the sort of thinker who can locate a good question, sniff out promising threads, and collaborate effectively with others and with the broader research ecosystem.
Food and lodging are provided free of charge at CFAR’s workshop venue in Bodega Bay, California. Participants must be able to remain onsite, largely undistracted for the duration of the program (e.g. no major appointments in other cities, no large looming academic or professional deadlines just after the program).
[5/28/19 Update: Applications closed on March 31, finalists were interviewed between April 1 and April 17, and admissions decisions (yes, no, waitlist) were sent in April.]
If you have any questions or comments, please email Colm at the contact address above, or, if you suspect others would also benefit from reading the answer, post them here.
We’re also currently accepting applicants for our AI Risk for Computer Scientists program. While both programs are co-run by MIRI and CFAR staff, and both focus on rationality plus AI safety content, there are some essential differences:
• Length: MSFP runs for two weeks while AIRCS is five days long.
• Frequency: MSFP occurs once a year, while AIRCS happens about once every two months.
• Focus: While MSFP has a larger focus on math, AIRCS has a larger focus on computer science.
• Participants: MSFP is more targeted toward people interested in Embedded Agency sub-problems.