The Fund for Alignment Research is a new organization to help AI safety researchers, primarily in academia, pursue high-impact research by hiring contractors. They’re a group of researchers affiliated with the Center for Human-Compatible AI at UC Berkeley and other labs like Jacob Steinhardt’s at UC Berkeley and David Krueger’s at Cambridge. They are hiring for:
Research Engineer (20–40 hours/week, remote or in Berkeley, $50–100/hour) – looking for 2–3 individuals with significant software engineering experience or experience applying machine learning methods.
Communications Specialist and Senior Communications Specialist (10–40 hours/week, remote or in Berkeley, $30–80/hour) – communicating high-impact AI safety research. This could be via technical writing/editing, graphic design, web design, presentation development, social media management, etc.
If you have any questions about the roles, please contact them at hello@alignmentfund.org.
Appreciate the recommendation. Around April 1st I decided that the “work remotely for an alignment org” thing probably wouldn’t work out the way I wanted it to, and switched to investigating “on-site” options—I’ll write up a full post on that when I’ve either succeeded or failed on that score.
On a mostly unrelated note: every time I see an EA job posting that pays at best something like 40–50% of what qualified candidates would get in industry, I notice how that collides with the “we are not funding constrained” messaging. I understand there are reasons why EA orgs may not want to advertise themselves as paying top-of-market, but nobody has outright said that’s what’s going on, and there could be other, less-visible bottlenecks that I haven’t observed yet.
For what it’s worth, I was in a similar boat: I’ve long wanted to work on applied alignment, but also to stay in Australia for family reasons. Each time I’ve changed jobs I’ve made the same search as you, and ended up just taking a job where I can apply some ML in industry, simply so I can stay close to the field.
For all the calls for alignment researchers, most orgs seem hesitant to do the obvious thing that would really expand their talent pool: opening up to remote work.
Obviously they struggle to manage and communicate remotely, and that prevents them from accessing a larger and cheaper pool of global talent. But they could accelerate alignment just by supplementing with remote contractors, or by learning to manage remote work.
For what it’s worth, I’ve updated somewhat against the viability of remote work here (mostly for contingent reasons—the less “shovel-ready” work is, the more of a penalty I think you end up paying for trying to do it remotely, due to communication overhead). See here for the latest update :)