AI Summer Fellows Program
CFAR and MIRI are running a free AI Summer Fellows Program (AISFP) in the San Francisco Bay Area from June 27 to July 14. Aimed at increasing participants’ ability to do technical research in AI alignment, the program combines CFAR’s applied rationality content with practice doing technical research on AI safety alongside MIRI researchers and 20–24 other participants.
Program Description
AISFP is a two-week summer program designed to increase participants’ ability to do technical research on the AI alignment problem. It will take place in the San Francisco Bay Area from June 27 to July 14.
The intent of the program is to boost participants, as far as possible, in four skill areas:
The CFAR applied rationality skillset, including both what is taught at our intro workshops and more advanced material from our alumni workshops.
Epistemic rationality as applied to the foundations of AI and other philosophically tricky problems; that is, the skillset taught in the core LW Sequences (e.g., reductionism, and how to reason in contexts as confusing as anthropics without getting lost in words).
Technical forecasting regarding AI and AI alignment interventions (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
The ability to do AI alignment-relevant technical research while reflecting on the cognitive habits involved. We will give crash courses in reflection, logical uncertainty, and decision theory.
Finalists will be contacted by a MIRI staff member for an interview.