I’m an ML engineer at a FAANG-adjacent company, big enough to train our own sub-1B parameter language models fairly regularly. I work on training some of these models and finding applications for them in our stack. I saw the light after reading most of Superintelligence, and I feel like I’d like to help out somehow. I’m in my late 30s with kids, and live in the SF Bay Area. I kinda have to provide for them, don’t have any family money or resources to lean on, and would rather not restart my career. I also don’t think I should abandon ML and try to do distributed systems or something; I’m a former applied mathematician with a PhD, so ML was a natural fit. I like to think I have a decent grasp on epistemics, but I haven’t gone through the Sequences. What should someone like me do? Some ideas:

(a) Keep doing what I’m doing, staying up to date but at least not at the forefront;
(b) make time to read more material here and post occasionally;
(c) maybe try to apply to Redwood or Anthropic… though dunno if they offer equity (doesn’t hurt to find out though);
(d) try to deep dive on some alignment sequence on here.
Both 80,000 Hours and AI Safety Support are keen to offer personalised advice to people facing a career decision and interested in working on alignment (and, in 80k’s case, many other problems as well).
Noting a conflict of interest: I work for 80,000 Hours, and I know of but haven’t used AISS. This post is in a personal capacity; I’m just flagging publicly available information rather than giving an insider take.
You might want to consider registering for the AGI Safety Fundamentals Course (or reading through the content). The final project provides a potential way of dipping your toes into the water.
Applying to Redwood or Anthropic seems like a great idea. My understanding is that they’re both looking for aligned engineers and scientists, and both are very aligned orgs. The worst cases seem to be that they (1) say no, or (2) don’t make an offer that’s enough for you to keep your lifestyle (whatever that means for you). In either case you haven’t lost much by applying, and you certainly don’t have to take a job that puts you in a precarious place financially.
Pragmatic AI Safety (link: pragmaticaisafety.com) is supposed to be a good sequence for helping you figure out what to do. My best advice is to talk to some people here who are smarter than me and make sure you understand the real problems, because the most common outcome besides reading a lot and doing nothing is doing something that feels like work but isn’t actually addressing anything important.
Work your way up the ML business hierarchy to the point where you are having conversations with decision makers, and try to convince them that unaligned AI is a significant existential risk. Even a small chance of succeeding at this more than makes up, in expected-value terms, for any harm you cause by working in ML, given that if you left the field someone else would take your job.
One of the paths with non-zero hope in my mind is building a weakly aligned, non-self-improving research assistant for alignment researchers. Ought and EleutherAI’s #accelerating-alignment are the two places I know of that are working in this direction fairly directly, though the various language model alignment orgs might also contribute usefully to the project.
Given where you live, I recommend going to some local LW events. There are still LW meetups in the Bay Area, right?
You should apply to Anthropic. If you’re writing ML software at a semi-FAANG company, they probably want to interview you ASAP. https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers
The compensation is definitely enough to take care of your family and still save some money!
Anthropic does offer equity; they can give you more details in private.
I recommend applying to both (it’s a cheap move with a lot of potential upside); let me know if you’d like help connecting with either of them.
If you’re learning by yourself, I’d definitely get one-on-one advice (others have linked options); people will make sure you’re on the best path possible.