Yay DeepMind safety humans for doing lots of (seemingly-)good safety work. I’m particularly happy with DeepMind’s approach to creating and sharing dangerous capability evals.
Yay DeepMind for growing the safety teams substantially:
We’ve also been growing since our last post: by 39% last year, and by 37% so far this year.
What’s the size of the AGI Alignment and Frontier Safety teams now?
It depends fairly significantly on how you draw the boundaries; I think anywhere between 30 and 50 is defensible. (For the growth numbers I chose one specific but arbitrary boundary; I expect other reasonable boundaries would give similar numbers.) Note this does not include everyone working on safety, e.g. it doesn’t include the people working on present-day safety or adversarial robustness.