If you hire for a trait that makes people more motivated for the job, even though it restricts your pool of candidates, I don’t think you need to pay a premium to hire those people.
To be hired by SpaceX you need to be passionate about SpaceX’s mission. In practice, that plays out as those employees putting up with bad working conditions because they believe in the mission.
Effects that should increase Anthropic’s salaries relative to OpenAI:
A) The pool of AI-safety-focused candidates is smaller
B) AI-safety-focused candidates are more motivated
Effects that should decrease Anthropic’s salaries relative to OpenAI:
C) AI-safety-focused candidates should be willing to accept significantly lower wages
New notes:
(B) and (C) could cancel each other out, but an exact cancellation would be a bit suspicious. Still, even a partial cancellation would make the difference between OpenAI and Anthropic smaller and harder to observe.
(B) may have only a small effect: given that hires are already world-class talents, it would be surprising if they could significantly increase their performance simply by being more motivated. That is, non-AI-safety-focused candidates are also highly motivated, so the difference in motivation between the two groups is probably not large.
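The cancellation argument can be made concrete with a toy model. Every number below is a made-up illustration of the three effects, not actual compensation data for either lab:

```python
# Toy model of how effects (A), (B), and (C) interact.
# All figures are hypothetical illustrations, not real salary data.
baseline = 500_000            # assumed market salary for a frontier-lab engineer

scarcity_premium = 50_000     # (A) smaller candidate pool pushes pay up
motivation_premium = 10_000   # (B) extra motivation adds a little marginal value
mission_discount = -55_000    # (C) mission-driven candidates accept less

anthropic_salary = baseline + scarcity_premium + motivation_premium + mission_discount
openai_salary = baseline

gap = anthropic_salary - openai_salary
print(gap)  # 5000 — the effects nearly cancel, so the gap is small and hard to observe
```

With these (arbitrary) magnitudes, a large scarcity premium and a large mission discount leave only a small net difference, which is the observational problem the note describes.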
There’s a difference between being motivated to Goodhart performance metrics and send loyalty signals, and being motivated to do what’s good for the company’s mission.
If we take OpenAI, there were likely people smart enough to know that stealing Scarlett Johansson’s voice was going to be bad for OpenAI. Sam Altman, however, wanted to do it out of vanity, and opposing the project would have sent bad loyalty signals.
There’s a lot that software engineers do whose effects aren’t easy to measure, so being motivated to help the mission, rather than only to hit performance metrics, can be important.