[Context: I’m Matthew Graves, and currently handle a lot of MIRI’s recruiting, but this is not MIRI’s official view.]
We’re hiring engineers to help with our research program, which doesn’t require extensive familiarity with AI alignment research.
When reading through research guides, it’s better to take a breadth-first approach where you only go deep on the things that are interesting to you, and not worry too much about consuming all of it before you start talking with people about it. Like with software projects, it’s often better to find some open problem that’s interesting and then learn the tools you need to tackle that problem, rather than trying to build a general-purpose toolkit and then find a problem to tackle.
There are some online forums where you can bounce ideas and plans off of people; LW is historically a decent place for this, as are Facebook groups like AI Safety Open Discussion. I expect there to be more specialty CFAR workshops this year, but encourage you to get started on stuff now rather than waiting for one. There are also people like me at MIRI and other orgs who field these sorts of questions. I encourage you to contact us too early instead of too late; the worst-case scenario is that we reply with a stock email containing links we think will be helpful, rather than your question eating a huge amount of our time. (For getting detailed reactions to plans, I suspect posting to a group where several different people might have the time to respond soon would work better than emailing only one person, and it’s considerably easier to answer specific questions than it is to give a general reaction to a plan.)