Applications for AI Safety Camp 2022 Now Open!
If you’ve read about alignment research and you want to start contributing, the new iteration of the AI Safety Camp is a great opportunity!
It’s a virtual camp from January to May 2022, where you collaborate with other applicants (about 1 hour on normal workdays, 7 hours on weekend sprint days) on open problems proposed and supervised by mentors like John Wentworth, Beth Barnes, Stuart Armstrong, Daniel Kokotajlo… Around this core of research, the camp also includes talks and discussions about fundamental ideas in the field, how alignment research works, and how and where to get a job/funding.
All in all, the AI Safety Camp is a great opportunity if:
You have read enough about alignment that you’re convinced of the importance of the problem
You want to do alignment research (whether conceptual or applied), or to collaborate with alignment researchers (for example, by doing policy work)
You don’t yet feel like you have enough research taste and grasp of the field to choose your research problems yourself
Note that you don’t need advanced maths skills to participate in the camp: some of the projects don’t require any specific skillset, and others call for quite unusual ones (evolutionary genetics, history...). If you care about alignment and are in this situation, I encourage you to apply for a project without a required skillset and learn what you need as you go along.
All the details on how to apply are available on the website (including the list of open problems).