On the concept of “talent-constrained” organizations
Some people have claimed that organizations are often talent-constrained. In other words, they’re not short on money, but they’re not able to find talented people. Specifically, some people, such as biomedical researcher John Todd, have claimed they’d turn down large amounts of additional money in order to be able to hire superstars. Others have claimed that the effective altruism movement is talent-constrained.
I’ll use talent-constrained in the following sense: an organization is talent-constrained if it’s willing to turn down a substantial amount of additional money in order to be able to hire a superstar, and that additional money it’s willing to turn down is enough to hire several people at the current salaries they offer.
My first reaction to claims of talent constraint is: why don’t these organizations bid up the price of talent to the levels that they claim they’re willing to forgo to hire that talent? There could be many possible answers. I’ll explore the most salient here.
1. Talent constraint because of cash constraint
Some organizations are cash-constrained, so the ways they would use additional cash at the margin differ significantly from the ways they can reallocate existing cash. So the fact that they'd be willing to forgo huge amounts of additional money in order to hire new talent doesn't necessarily mean that they can reallocate existing money to bid for superstar talent.
While I agree that this is a common situation, particularly for small organizations, I don’t think that talent-constrained is the right description of this situation.
2. Genuine absence of talented people
In some cases, the talented people the organization needs are genuinely very rare. So it may so happen that the organization simply hasn’t been approached by any person who’d be impressive enough to hire at a high wage. However, I don’t find this explanation very convincing.
People decide whether or not to approach organizations based partly on publicly available information about how much those organizations pay. If the organizations in question don’t pay most of their workers high salaries (presumably because these workers aren’t superstars) then the superstars who are considering whether to apply to those organizations may believe they’re not going to be paid high salaries, hence they may not bother to apply. If organizations care enough about hiring superstars, they need to proactively indicate in their hiring advertisements that they are willing to pay large amounts for superstars.
If there truly exist no talented people who fit the description the organization needs, then the real problem is that the organization is simply engaging in wishful thinking. Calling it “talent-constrained” is misleading because it’s bemoaning the absence of an option that is impossible to have anyway. (The concept of talent constraint may still make sense at a broader societal level; perhaps more people need to train in relevant fields when they are younger, or perhaps licensing restrictions or migration restrictions are preventing the hiring of talented people).
3. Talented people would or should be willing to work for low pay
The claim here is that one of the characteristics that defines genuinely talented people is a strong intrinsic motivation to work very hard. Those who are willing to work only in exchange for stellar pay are unlikely to be good cultural fits for the job, and are unlikely to be retained in the field.
This type of explanation may make sense in some cases. For instance, it arguably works in principle for effective altruism organizations: they want to hire people who are genuinely passionate about effective altruism. Demanding a high salary as a precondition of being employed is a negative signal and suggests one is more interested in personal gain than in altruism.
4. Workplace egalitarianism and morale
Significant disparities in the amounts of money that different people in an organization are paid can be bad for morale. Therefore, even if there are a few highly talented people whose marginal contribution would command high salaries, paying them more would either create workplace friction due to income disparities, or force employers to raise everybody’s salaries to a higher level. Neither of these may pass a cost-benefit analysis.
5. Irrationality of funders
The most uncharitable explanation is that employers and their funders are simply irrational. In this view, they have an intrinsic aversion to paying people large amounts of money, and this aversion doesn’t stand up to rational scrutiny. The aversion may be displayed by people running the organization, or the people funding them (which may be a larger institution with which the organization is affiliated, or rich individual and foundation donors, or a large number of small donors). For instance, an effective altruism organization that paid a salary of $300,000 to its CEO might lose the support of donors who are repelled by the huge amounts of money made. Research labs at universities may be constrained by the payscales used by the universities. They may also be bureaucratically constrained with respect to reallocating funds from equipment to salaries in order to quickly scoop up a star researcher.
Of the explanations offered, which do you think carries the most weight for specific organizations that you know claim to be talent-constrained? Are there other explanations that I missed? What do you think of my critiques and discussion of specific explanations?
Thanks to Jonah Sinick and Ben Todd for comments that inspired this post (I didn’t run the actual post by them).
Another suggestion (a variant on #2 which explains why the shortage is worse than one would expect given how many smart people there are out there): there is a shortage of reliably diagnosable talented people. Hiring requires multiple factors of which ‘having talent’ is only one, you must also be able to signal in some way your overall appropriateness and safety.
One of the impressions I get from descriptions of the interview process at Google & Facebook (and previously, Microsoft) is that they were more worried about hiring a flawed candidate than rejecting a talented candidate. The reasoning being that these places are ‘o-ring production’ sorts of places, where a single person could wreak a lot of havoc, by either commission or omission; so despite their shortage of talent, they’re forced to be paranoid in their hiring process and biased towards rejection.
Mere money doesn’t solve their problem: they can offer tons of money towards random candidates, but not to the ones which are visibly/reliably talented (which are a small subset of the talented).
A way around that might be to make it known that big salaries are available, but not up front, only by proven merit after being given a job. Does this already happen?
This actually seems very common in office jobs where you find many workers with million dollar salaries. Wall Street firms, strategy consultancies, and law firms all use models in which salaries expand massively with time, with high attrition along the way: the “up-or-out” model.
Even academia gives tenured positions (which have enormous value to workers) only after trial periods as postdocs and assistant professors.
Main Street corporate executives have to climb the ranks.
I think the financial industry is like this. Sure, the starting salaries are decent, but it’s nothing compared to what you get if you make partner.
Sounds like a startup! :)
Yes. And you might also worry that high pay gives impostor candidates a very strong incentive to try to masquerade as talented—that your applicant pool might be worse if you offer 300k per year than 200k per year.
4, 5, and 2 in that order. You might think you could bypass 2 by advertising a high enough salary, but merely advertising that a high salary is available triggers problems 4 and 5 immediately. And if you don’t advertise a superstar salary and don’t have a reputation for paying one, you may never be approached by anyone both money-motivated enough and talented enough to force you to confront whether taking on disadvantages 4 and 5 is worth it for that particular person.
This reply is based on experience.
Based just on my experience at MIRI, I’ll add another vote to “4, 5, and 2 in that order,” especially if #5 includes funders and if #2 includes gwern’s “shortage of reliably diagnosable talented people.”
Item #4 is a pretty big deal in practice. ’Nuff said.
I’ve exhibited #5 throughout my tenure as CEO at MIRI, and perhaps still do. I’ve been repeatedly resistant to higher salaries and in retrospect I think the Board was right in two cases to be less timid than I was. Now the big worry is funders: the EA movement, in particular, may prefer martyr-ish salaries, though on that point I’m relieved to see that GiveWell’s founders still make substantially more than I do.
On #2, consider MIRI’s hiring of myself and Nate Soares. Neither of us is a “superstar” — at least not yet; we’ll try! — but we are clearly good for MIRI at the present stage, and yet I came in with no executive experience and no relevant technical background, and Nate came in with no research publications, having learned logic and model theory a few months before his hiring. There are probably other good hires out there available to MIRI but I just don’t know what they look like. And of course in general, the world is not training FAI talent the way it trains, say, programming talent or finance talent. So in MIRI’s case there is a pretty unusual “genuine absence of talented people.”
In the context of math talent (as opposed to philosophical/reductionist/naturalist*) at MIRI?
*I’m interested in whether the talent in question is something already recognized by academia (being really good at math in particular ways is well understood to be a desirable quality in the mathematical community, but the specific type of reductionist philosophical talent you would be looking for isn’t seen that way in academic philosophy in general).
Another possibility is that money doesn’t move people at the superstar level. They may simply not wish to work at the organization even for extremely large amounts of money.
Feynman turned down an offer from another university offering him a huge salary increase on the grounds that having a lot of money would end up ruining his life.
Thanks for the exploration of the issue.
I spent some time thinking about this question a while ago. My general conclusion was that some version of your factor (4) is doing a lot of the work; I then investigated how this leads to a meaningful distinction between funding constraint and talent constraint. I’ve just shared my notes here (they were framed for CEA, but should be generally applicable).
The general argument behind expecting (4) to be a big factor doesn’t rely on ‘fairness’ or ‘morale’. It can arise even for totally self-interested rational agents. It goes something like this:
Employing someone is a trade. There is a maximum salary you’d pay for their labour, and a minimum they’d accept. You end up paying them something in the middle, and the trade surplus is split between you.
Individuals are better able to hide their preferences than larger organisations. If the organisation is known to be paying $X to person Y for their labour, similarly qualified people are likely to ask for salaries closer to $X in the knowledge that the organisation is happy to pay this rate.
So paying high salaries to some people shifts the balance of power in salary negotiations in favour of other employees. The employer will capture less of the trade surplus of employment in those cases.
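The surplus-splitting argument above can be made concrete with a toy numeric sketch. All figures here are illustrative assumptions, not anything from the linked write-up: the point is just that a publicly known high salary raises later candidates’ effective reservation wage, so the employer captures less of the surplus on every subsequent hire.

```python
# Toy model of salary negotiation as surplus-splitting.
# All numbers are hypothetical, purely for illustration.

def negotiated_salary(employer_max, candidate_min, split=0.5):
    """Salary when the trade surplus (employer_max - candidate_min)
    is split, with `split` being the candidate's share."""
    assert employer_max >= candidate_min
    return candidate_min + split * (employer_max - candidate_min)

employer_max = 200_000   # most the employer would pay for this labour
candidate_min = 100_000  # least the candidate would privately accept

# Private negotiation: surplus split evenly.
private = negotiated_salary(employer_max, candidate_min)
print(private)  # 150000.0

# After a publicly known $180k hire, similarly qualified candidates
# anchor their asking price near that figure, raising their effective
# reservation wage and capturing most of the surplus.
anchored_min = 180_000
public = negotiated_salary(employer_max, anchored_min)
print(public)   # 190000.0
```

Even with totally self-interested rational agents, the employer’s expected surplus per hire shrinks once one high salary becomes common knowledge, which is exactly the incentive against paying it.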
Thanks for the linked write-up. I think that provides a good theoretical framework for the issue. And maybe you can do a LessWrong post based off of your writeup—that should get more attention and I’m eager to see what others think of your framing.
Let’s get a little more specific here.
Can anyone here name one currently living individual that MIRI would like to hire away from their current position to work on Friendly AI research, if money were no object? Terence Tao, perhaps? Do you think he would leave his current position as a university professor if you could offer him, say, a ten million dollar annual salary?
To do research, someone’s got to have some actual interest in the problem space, or they’ll end up fiddling around and doing stuff that’s good for their interests or their long-term career but not necessarily for what their employer wants. So I don’t know who has the capacity to acquire that interest. Tao would be good if he acquired an interest in the subject but I don’t know if he could. Gowers at least commented on Baez’s summary of the earlier Christiano result, but a short G+ comment isn’t that much evidence. I don’t currently know of any math superstars who want to work on FAI theory but only for a high salary — if I did, and I thought it would be a good hire, I’d reach out to MIRI’s donors and try to solicit targeted donations for the hire.
Vladimir Voevodsky is a math superstar who plausibly could acquire such an interest.
Here is a summary of a recent talk he gave. After winning the Fields medal in 2002 for his work on motivic cohomology, he felt he was out of big ideas in that field. So he “decided to be rational in choosing what to do” and asked himself “What would be the most important thing I could do for math at this period of development and such that I could use my skills and resources to be helpful?” His first idea was to establish more connections between pure and applied mathematics. He worked on that for two years, and “totally failed.” His second idea was to develop tools/software to help mathematicians check their proofs. There had already been lots of work on this subject, and several different software systems for this purpose already existed. So he looked at the already existing software. He found that either he could understand it and see that it wasn’t what he wanted, or it just didn’t make sense to him. “There was something obviously missing in the understanding of those.” So he took a course at Princeton University on programming languages using the proof assistant Coq. Halfway through the course, he suddenly realized that Martin-Löf types could essentially be interpreted as homotopy types. This led to a community of mathematicians who developed Homotopy Type Theory/Univalent Foundations with him, which is a completely new and self-contained foundation of mathematics.
Andrej Bauer, one of the Homotopy Type theorists, has said “We’ve already learned the lesson that we don’t know how to program computers so they will have original mathematical ideas, maybe some day it will happen, but right now we know how to cooperate with computers. My expectation is that all these separate, limited AI success, like driving a car and playing chess, will eventually converge back, and then we’re going to get computers that are really very powerful.” Plausibly, Voevodsky himself also has some interest in AI.
So here is a mathematician with:
a solid track record of solving very difficult problems, and coming up with creative new insights.
good efforts to make rational decisions in what sort of mathematics he does, yielding an interest and willingness in completely switching fields if he thinks he can do more important things there.
an ability to solve practical problems using very abstract mathematics.
I think it would be worth trying to get him interested in FAI problems.
Can you provide a list of problems that you would want them to work on?
Heck, is there anything we can try asking about on MathOverflow? How about the tiling problem?
I think there is a lot to #5. This is, as you hint at, connected with egalitarianism. There is a natural tendency to want to pay people who do the same job roughly the same, and only under strong market pressure will the income of the talented person match the value he provides to the employer. In academia, where I work, it is blatantly obvious that some people contribute vastly more than others, but they are still paid roughly the same. Only in the more competitive American system are wage spreads starting to reflect differences in output. No doubt this development will continue.
#5 + hypocrisy. The employers may be saying “we are offering huge money and the talented people still aren’t coming” when their offer actually may not seem like “huge money” to the people who have the necessary skills.
Some changes in life are not reversible. Imagine that you are a talented person and you already have a decent job and make decent money. Would you change it for another job just because it offers you 10% more? I probably wouldn’t, because you never know: the new job may actually suck, and returning to the old place may no longer be an option. But for twice the salary, I would take the risk. The employer, though, might think that’s too much, and that their +10% offer is already very generous. (In that case maybe the solution would be to offer 10% more, plus a one-time huge bonus for staying in the new job for 6 months.)
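The irreversibility argument can be checked with back-of-the-envelope expected values. The probability and fallback figures below are assumptions I’ve picked purely to illustrate the shape of the argument: even a risk-neutral candidate can rationally refuse a 10% raise once the downside of an irreversible bad move is priced in.

```python
# Illustrative numbers only: why a 10% raise may not tempt a candidate
# when switching jobs is irreversible and the new job might be bad.

def expected_salary_after_switch(new_salary, p_bad, fallback_salary):
    """Expected salary after switching, if with probability p_bad the
    new job turns out badly and the candidate ends up at a fallback
    salary (the old job no longer being available)."""
    return (1 - p_bad) * new_salary + p_bad * fallback_salary

current = 100_000   # salary at the current, known-good job
p_bad = 0.3         # assumed chance the new job turns out badly
fallback = 70_000   # assumed salary after a forced second move

# A 10% raise leaves the expected value below the status quo...
print(expected_salary_after_switch(110_000, p_bad, fallback))  # 98000.0

# ...while doubling the salary clears it comfortably.
print(expected_salary_after_switch(200_000, p_bad, fallback))  # 161000.0
```

Risk aversion only strengthens this: a candidate who weights losses more than gains needs an even larger premium than the break-even figure above.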
Another possibility, a bit similar to #3, but not exactly the same—the talented people may be motivated by other things than money; and maybe they already have all the money they need (assuming they aren’t effective altruists). They now optimize for other things. To attract them, you would have to offer some of those other things, e.g. shorter working hours, more freedom, etc.
In other words, the labour market resembles an oligopsony much more than one would guess by looking at the total number of employers alone.
I think it’s a point about risk aversion, not about the structure of the labour market.
I think there are multiple causes.
People are risk-averse. But even if they weren’t, changes usually have transaction costs. For example if people have to move from one city to another, that costs something. Also it means that their partner could have to change their job to stay together.
If you want to make a lot of money, you have to specialize in something. That naturally reduces the number of potential employers. You can switch to doing something different, but again, there are transaction costs.
All true. And, again, all of that doesn’t have much to do with whether the labour market is an oligopsony.
Seems to me that with enough specialization there are few buyers and few sellers. Which of these numbers is smaller probably depends on specific specialization, and may change over time.
I hate to use a group selection argument, but maybe people think that trying to poach top talent by offering higher salaries will just get them into a bidding war that would be bad for everyone but the people being bid on?
This isn’t really a group selection argument, and it’s definitely something that happens.
It may also just be difficult to find talented people, especially for new or abnormal fields like effective altruism.