I’d have Level 1 (AI Safety Fundamentals) be Level 4 or 5, probably. I’m pretty happy to hire engineers who have good ML skills but are rusty on “AI safety fundamentals”; I think they can pick that up on the job much more easily than the coding / ML skills.
Interesting; that is the level that feels most like it doesn’t have a solid place in a linear progression of skills. I wrote “Level 1 kind of happens all the time” to try to reflect this, but I ultimately decided to put it at the start because I feel that for people just starting out it can be a good way to test their fit for AI safety broadly (do they buy the arguments?) and to decide whether they want to go down a more theoretical or empirical path. I just added some language to Level 1 to clarify this.
My understanding is that Level 1 is supposed to happen in parallel with the others, but this might be clearer by separating it outside the numbered system entirely, like Level 0 or Level −1 or something.
As for why it’s included as the first step, I think the reasoning is that if someone knows nothing about AI safety, their first question is going to be “Should I actually care about this problem?”, and to answer that question they do need to do a little bit of AI-safety-specific reading.
I agree this could be made clearer. One of the first bits of advice I got when I started asking around about this stuff was “Technical ML ability is harder and takes longer to learn than AI safety knowledge does, so spend most of your time on the ML as opposed to AI safety,” and I remember this being a very unintuitive insight.