Could Anthropic face an OpenAI drama 2.0?
I forecast that Anthropic would likely face a backlash from its employees similar to OpenAI’s if Anthropic’s executives were to knowingly decrease the value of Anthropic shares significantly, e.g. by switching from “scaling as fast as possible” to “safety-constrained scaling”. In that case, I would not find it surprising if a significant fraction of Anthropic’s staff threatened to leave or actually left the company.
The reasoning is simple: we don’t observe significant differences between the wages of OpenAI and Anthropic employees, and assuming the two workforces are drawn from roughly the same distribution of skills, it seems that Anthropic is not able to use its AI safety focus as a bargaining argument to significantly reduce wages. If true, this would mean that safety is of relatively little importance to most of Anthropic’s employees.
Counterargument: Anthropic is hiring from a much more restricted pool of candidates, namely only the safety-concerned ones. In that case, Anthropic would have to pay a premium to hire these people, and it happens that this premium is roughly equivalent to the discount these employees are willing to give Anthropic because of its safety focus.
If you hire for a trait that makes people motivated for the job, even though it restricts your pool of candidates, I don’t think you need to pay a premium to hire them.
To be hired by SpaceX you need to be passionate about SpaceX’s mission. In practice, that plays out as employees putting up with bad working conditions because they believe in the mission.
Right 👍
So the effects are:
Effects that should increase Anthropic’s salaries relative to OpenAI:
A) The pool of AI-safety-focused candidates is smaller
B) AI-safety-focused candidates are more motivated
Effects that should decrease Anthropic’s salaries relative to OpenAI:
C) AI-safety-focused candidates should be willing to accept significantly lower wages
New notes:
(B) and (C) could cancel each other out, but that would be a suspicious coincidence. Still, even a partial cancellation would make the wage difference between OpenAI and Anthropic smaller and harder to observe.
(B) may have only a small effect: given that hires are already world-class talent, it would be surprising if they could significantly increase their performance further simply by being more motivated. I.e., candidates who are not AI-safety-focused are also very motivated, so the difference in motivation between the two groups is probably not large.
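To make the cancellation argument explicit, here is a rough sketch of the wage decomposition (the premium and discount terms are my own labels for effects A, B, and C, not measured quantities):

w_Anthropic ≈ w_OpenAI + premium_pool (A) + premium_motivation (B) − discount_safety (C)

Since we observe w_Anthropic ≈ w_OpenAI, the counterargument requires premium_pool + premium_motivation ≈ discount_safety, i.e. a near-exact cancellation, which is the suspicious coincidence noted above.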
There’s a difference between being motivated to Goodhart performance metrics and send loyalty signals, and being motivated to do what’s good for the company’s mission.
Take OpenAI: there were likely people smart enough to know that stealing Scarlett Johansson’s voice was going to be bad for OpenAI. Sam Altman, however, wanted to do it out of vanity, and opposing the project would have sent bad loyalty signals.
There’s a lot that software engineers do whose effects aren’t easy to measure, so being motivated to help the mission, and not just to hit performance metrics, can be important.