That’s a claim, but is something meritocratic if it doesn’t test merit at all?
Leetcode has been goodharted long past the point where what it is selecting for has any signal.
Back when the questions were more reasonable and practice sites didn’t exist, you could actually be measuring “can this person invent an algorithm for a novel problem and code it up in an hour?” That’s real talent.
The question difficulty now, even for a “medium”, makes this impossible. The original solution was invented by a genuine CS talent at some elite university decades ago, and it took them days to months.
Anyone solving it in an hour, much less within the new 20-minute standard, is cheating through pattern recognition or direct knowledge of the solution.
If they tested young lawyers on Ace Attorney speedruns, would you consider that meritocratic?
I’ve done a LOT of interviewing for all levels of software engineering, at big and small companies, and it’s simply wrong to say it’s all leetcode-style ranking. Short-duration coding challenges are a big part of most interview processes, but they’re not graded the way competitions are (at the better employers, at least). It’s not about the right answer; it’s about the explanation, the follow-up questions, and the candidate’s understanding of the algorithm and code. And perhaps a bit about the right answer and fluency of coding, but that’s more pass/fail than something requiring tons of practice. I routinely give hints to get the candidate on the right path to remember or figure out a working solution.
It’s disturbing just how many applicants get to an interview without having done even minimal practice on those sites (or without coding regularly in their previous job). The coding challenge is a necessary part of the interview, just to weed out the non-starters. For more senior roles, the design discussions and technical dives into resume topics (“I see you’ve worked with distributed caching—tell me how you managed varying invalidation needs.”) are more useful for final decisions.
More importantly, interviewing and hiring are a pretty small part of a developer’s career impact. In-role impact is never 100% meritocratic either, but at a lot of places it’s pretty good.
I have no clue whether young attorneys are tested on actually knowing which cites to start with or how to prepare for a relevant case, but I kind of hope they are.
TL;DR leetcode-style interview coding is (or should be, if done well) satisficing, not ranking. Being competent at it is just as good (possibly better, if it lets you show other strengths) as being great at it.
TL;DR leetcode-style interview coding is (or should be, if done well) satisficing, not ranking. Being competent at it is just as good (possibly better, if it lets you show other strengths) as being great at it.
I would agree with this, and said basically the same thing upthread. It’s a goodharted metric. Sure, whether someone can do it at all is one signal, but deciding between 10 candidates based on “2 mediums, 40 minutes”, where only 1–2 will pass, is essentially arbitrary.
The 1–2 who passed may have been better at LC than the rest, or may just have been lucky this round.
Say 6 candidates finished at least one medium and were each partway through debugging the second. Your test doesn’t realistically distinguish within that set of 6.
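The “lucky this round” point can be illustrated with a toy Monte Carlo sketch. Everything here is invented for illustration: candidates get a hypothetical true skill drawn from a normal distribution, a single interview round observes that skill plus noise, and the top 2 observed scores “pass”. The question is how often the passers are actually the 2 strongest candidates.

```python
import random

def selection_accuracy(n_candidates=10, n_pass=2, noise=1.0, trials=10000, seed=0):
    """Fraction of passers who are truly among the strongest n_pass candidates.

    Each candidate's true skill ~ N(0, 1); one interview round observes
    skill + N(0, noise). We pass the top n_pass observed scores and count
    how many of them are also in the top n_pass by true skill.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        skills = [rng.gauss(0, 1) for _ in range(n_candidates)]
        observed = [s + rng.gauss(0, noise) for s in skills]
        truly_best = set(sorted(range(n_candidates), key=lambda i: -skills[i])[:n_pass])
        passed = set(sorted(range(n_candidates), key=lambda i: -observed[i])[:n_pass])
        hits += len(truly_best & passed)
    return hits / (trials * n_pass)

# With per-round noise comparable to the skill spread, a substantial share
# of who passes is luck; shrink the noise and selection becomes reliable.
print(selection_accuracy(noise=1.0))
print(selection_accuracy(noise=0.05))
```

When the 6 middle candidates are close in skill (as in the scenario above), the per-round noise dominates the gaps between them, which is exactly why a single “2 mediums, 40 minutes” round ranks them mostly at random.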
Leetcode has been goodharted long past the point where what it is selecting for has any signal.
Yes, Leetcoding is essentially unrelated to actual software engineering, but isn’t one’s Leetcoding ability also an indirect test of intelligence + conscientiousness? I doubt that it’s the entire industry standard only because it’s fashionable.
Intelligence—there may be a small amount of correlation.
Conscientiousness—no. LC is competitive programming, a sport where you memorize a finite set of patterns and practice sloppy coding habits to minimize typing.
Someone did a survey of users’ IQs and Codeforces ratings, and the results showed that IQ might have a slight positive correlation with competitive-programming ability.
So that’s consistent with “a small amount of correlation”. The biggest correlate is competitive-programming practice.
So now you have to maintain two skillsets: the one for the job, and a second one just to remain employable.
And then, for each offer, you spend many hours “battling” through interviews, since even an expert can get unlucky, or get ghosted after passing for unclear reasons that may be illegal discrimination you can’t prove.
It’s dumb.
But yeah, maybe not as bad as the lawyer interview process, where it’s basically the reputation of your school and word of mouth and how good you are at golf or something.