I’m a greybeard engineer (30+ YOE) working in games. For many years now, I’ve wanted to transition to working in AGI, as I’m one of those starry-eyed optimists who thinks we might survive the Singularity.
Well I should say I used to, and then I read AGI Ruin. Now I feel like if I want my kids to have a planet that’s not made of Computronium I should probably get involved. (Yes, I know the kids would be Computronium as well.)
So a couple practical questions:
What can I read/look at to skill up on “alignment”? What little I’ve read says it’s basically impossible, so what’s the state of the art? That “Death With Dignity” post says that nobody has even tried. I want to try.
What dark horse AI/alignment-focused companies are out there that would be willing to hire an outsider engineer? I’m not making FAANG money (games-industry peasant living in the EU), so that’s not the same barrier it would be if I were some Facebook E7 or something. (I’ve read the FAANG engineer’s post and have applied at Anthropic so far, although I consider that probably a hard sell.)
Is there anything happening in OSS with alignment research?
I want to pitch in, and I’d prefer to be paid for doing it but I’d be willing to contribute in other ways.
What can I read/look at to skill up on “alignment”?
A good place to start is the “AGI Safety Fundamentals” course reading list, which includes materials from a diverse set of AI safety research agendas. Reading this can help you figure out who in this space is doing what, and which of it you think is useful. You can also join an official iteration of the course if you want to discuss the materials with a cohort and a facilitator (you can register interest for that here). You can also join the AI Alignment Slack to discuss these and other materials and meet others who are interested in working on AI safety.
What dark horse AI/alignment-focused companies are out there that would be willing to hire an outsider engineer?
I’m not sure what qualifies as “dark horse,” but there are plenty of AI safety organizations interested in hiring research engineers and software engineers. For these roles, your engineering skills and safety motivation typically matter more than your experience in the community. Places off the top of my head that hire engineers for AI safety work: Redwood, Anthropic, FAR, OpenAI, DeepMind. I’m sure I’ve missed others, though, so look around! These sorts of opportunities are also usually posted on the 80,000 Hours job board and in the AI Alignment Slack.
Thanks, that’s a super helpful reading list and a hell of a deep rabbit hole. Cheers.
I’m currently brushing up my rusty ML skills and will start looking in earnest for new employment in this field in the next couple of months. Thanks for the job board link as well.
You can also apply to Redwood Research.
( +1 for applying to Anthropic! )