rachelAF mentioned that she had the impression their safety teams were more talent-constrained than funding-constrained. So I inferred that getting more value-aligned people onto those teams wouldn’t just alter the team composition, but increase the size of their safety teams.
We probably need more evidence that those teams do still have open headcount though. I know DeepMind’s does right now, but I’m not sure whether that’s just a temporary opening.
You make a good point though. If the safety teams have little influence within those orgs, then #3 may be a lot more impactful than #1.
As far as I can tell, the safety teams of these two organisations are already almost entirely “value-aligned people in the AIS community”.
Interesting, how do you know this? Is there information about these teams available somewhere?