It’s important to be careful about the boundaries of “the same sort of safety work.” For example, my understanding is that “Alignment faking in large language models” started as a Redwood Research project, and Anthropic only became involved later. Maybe Anthropic would have done similar work soon anyway if Redwood hadn’t started the project. But, then again, maybe not. By working on things that labs might be interested in, you can potentially get them to prioritize work that is in scope for them in principle but that they might nevertheless neglect.