This seems pretty implausible to me when compared to “work at a lab with an explicit safety team or focus” (e.g. DeepMind, OpenAI, Anthropic[1], Redwood, Conjecture). Researchers generally also don’t get formal power in companies of Google or Facebook scale, nor any real ability to shift the culture.
I’m surprised that “is joining Facebook or Google the best way to work on alignment?” seems plausible enough to even be worth asking, when you could work on the problem directly in so many other places, including some with better R&D track records.
[1] I work at Anthropic, but my opinions are my own.