In the spirit of internalizing Ethan Perez’s tips for alignment research, I made the following spreadsheet, which you can use as a template: Empirical Alignment Research Rubric [public]
It provides many categories of ‘research skill’ as well as concrete descriptions of what ‘doing really well’ looks like.
Although the advice there is tailored to the specific kind of work Ethan Perez does, I think it applies broadly to many other kinds of ML/AI research.
The intended use is to self-evaluate periodically and get better at doing alignment research. To that end, I also recommend updating the rubric to match your personal priorities.
Hope people find this useful!