AIS researcher at CHAI. London/Berkeley. Cats are not model-based reinforcement learners.
Rachel Freedman
What can I read/look at to skill up with “alignment”?
A good place to start is the “AGI Safety Fundamentals” course reading list, which includes materials from a diverse set of AI safety research agendas. Reading it can help you figure out who in this space is doing what, and which parts of it you find useful. You can also join an official iteration of the course if you want to discuss the materials with a cohort and a facilitator (you can register interest for that here). There’s also the AI Alignment Slack, where you can discuss these and other materials and meet others who are interested in working on AI safety.
What dark horse AI/Alignment-focused companies are out there and would be willing to hire an outsider engineer?
I’m not sure what qualifies as “dark horse”, but there are plenty of AI safety organizations interested in hiring research engineers and software engineers. For these roles, your engineering skills and safety motivation typically matter more than your experience in the community. Off the top of my head, places that hire engineers for AI safety work include Redwood, Anthropic, FAR, OpenAI, and DeepMind. I’m sure I’ve missed others, though, so look around! These sorts of opportunities are also usually posted on the 80k job board and in the AI Alignment Slack.
DeepMind and OpenAI both already employ teams of existential-risk-focused AI safety researchers. While I don’t personally work on any of these teams, I get the impression from speaking to them that they are much more talent-constrained than resource-constrained.
I’m not sure how to alleviate this problem in the short term. My best guess would be free bootcamp-style training for value-aligned people who are promising researchers but lack specific relevant skills. For example, ML engineering training or formal mathematics education for junior AIS researchers who would plausibly be competitive hires if that part of their background were strengthened.
However, I don’t think that offering AI safety researchers as “free consultants” to these organizations would have much impact. I doubt the organizations would accept since they already have relevant internal teams, and AI safety researchers can presumably have greater impact working within the organization than as external consultants.
Short answer: Yep, probably.
Medium answer: If AGI has components that look like our most capable modern deep learning models (which I think is quite likely if it arrives in the next decade or two), it will probably be very resource-intensive to run, and orders of magnitude more expensive to train. This is relevant because it affects who has the resources to develop AGI (large companies and governments; likely not individual actors), secrecy (it’s much harder to secretly acquire a massive amount of compute than to secretly boot up an AGI on your laptop; this may even enable monitoring and regulation), and development speed (slower, more expensive iterations slow down overall progress).
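To give a rough sense of why training dominates the cost, here’s a minimal back-of-envelope sketch in Python. It assumes the standard ~6·N·D approximation for total training FLOPs and ~2·N FLOPs per generated token at inference for dense transformer-style models; the parameter count and token count below are made up purely for illustration, not a forecast.

```python
# Back-of-envelope compute sketch (illustrative assumptions only).
# Standard approximations for dense transformers:
#   training  ~ 6 * N * D FLOPs  (N parameters, D training tokens)
#   inference ~ 2 * N FLOPs per generated token

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate forward-pass compute per generated token."""
    return 2 * n_params

# Hypothetical model size and dataset, chosen only for illustration.
N = 1e12   # 1 trillion parameters
D = 2e13   # 20 trillion training tokens

train = training_flops(N, D)           # ~1.2e26 FLOPs total
infer = inference_flops_per_token(N)   # ~2e12 FLOPs per token

print(f"Training:  ~{train:.1e} FLOPs")
print(f"Inference: ~{infer:.1e} FLOPs per token")
print(f"Ratio:     ~{train / infer:.1e} tokens' worth of inference")
```

Under these assumptions, a single training run costs as much compute as generating tens of trillions of tokens, which is why even actors who could afford to run such a model might not be able to afford to train one.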
If you’re interested in further discussion of possible compute costs for AGI (and how this affects timelines), I recommend reading about bio anchors.