It’s really good to see this said out loud. I don’t have a broad overview of the funding field, just my own experience of trying to get into it—applying to established orgs, seeking funding for individual research and for alignment-adjacent work—and ultimately ending up at a capabilities research company.
I wonder if this is simply a result of the generally bad SWE/CS job market right now. People who would otherwise be in big tech or other AI work will be more inclined to do something in alignment. Similarly, if there’s less money in tech overall (maybe outside of LLM-based scams), there may be less money for alignment.
This is roughly my situation. Waymo froze hiring and had layoffs while continuing to raise output expectations, so my team and I had more work. I left in March to explore AI and landed on Mechanistic Interpretability research.
I have a similar story. I left my job at Amazon this year because of layoffs there. The release of GPT-4 in March also made working on AI safety feel more urgent.