I have similar feelings: there's no clear path for someone in an adjacent field. I chose my current role largely based on the expected QALYs, and I'd gladly move into AI Safety now for the same reason.
This post gives the impression that finding talent is not the current constraint, but if the pool is really so large, I'm confused about why the listed salaries for some of these roles are so high.
I've submitted applications to a few of these orgs, with cover letters that basically say "I'm here and willing if you need my skills". One frustration is recognizing Alignment as our greatest challenge while having no path to go work on it. Another is that the current labs look fairly homogeneous and a lot like academia, which is not how I'd organize for speed.