I wonder if it would make sense to make this half-open, in the sense that you would publish on LW links to the study materials, and maybe also some of the results. So that people who didn’t participate have a better idea.
There is no study material since this is not a course. If you are accepted to one of the project teams, then you will work on that project.
You can read about the previous research outputs here: Research Outputs – AI Safety Camp
The most famous research to come out of AISC is the coin-run experiment:

We Were Right! Real Inner Misalignment—YouTube
[2105.14111] Goal Misgeneralization in Deep Reinforcement Learning (arxiv.org)

But the projects are different each year, so the best way to get an idea of what it’s like is just to read the project descriptions.