“able to generate code I expect income and jobs to shift away from people with no credentials and skills to people with lots of credentials and political acumen and no skills”
What do you mean by this? What do you define as skills? In software engineering right now, “talent” means “ability to leetcode,” which we already know is about to vanish as a relevant metric once someone fine-tunes GPT-4-generation models on programming.
I am kind of imagining that age discrimination will matter less in coding, because that 55-year-old software architect with lots of credentials doesn’t need anyone else to build whole applications. They design the high-level architecture, with more experience than anyone younger, and these architecture text files get turned into code, where the design stacks the probabilities of failure such that mistakes made by the AI are usually found and fixed automatically.
(This works through unit-testable module design: a different session of the same or a different AI model writes the unit tests, so a bug only reaches production after a double failure, i.e. a fault in the code and a unit test that lets it through.)
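The double-failure claim above can be sketched with a bit of arithmetic. Assuming, purely as illustration, that each generated module contains a fault with probability p_fault and that an independently written unit test misses a given fault with probability p_test_miss (both numbers below are made up), a bug escapes only when both events happen:

```python
# Toy model of the "double failure" requirement: a bug reaches
# production only if the AI writes a fault AND the independently
# generated unit test fails to catch it. Probabilities are hypothetical.

def escape_probability(p_fault: float, p_test_miss: float) -> float:
    """Per-module escape probability, assuming the fault and the
    test miss are independent events."""
    return p_fault * p_test_miss

def any_escape(p_module: float, n_modules: int) -> float:
    """Chance at least one bug escapes across n independent modules."""
    return 1 - (1 - p_module) ** n_modules

# Example: 20% fault rate per module, tests miss 10% of faults.
per_module = escape_probability(0.20, 0.10)
print(per_module)              # 0.02 per module
print(any_escape(per_module, 50))
```

Even with a 2% per-module escape rate, the whole-application number grows quickly with module count, which is why the design has to stack the probabilities deliberately.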
So in a way this “all star,” who is slower than a younger person at actually writing code and is not up to date on the latest languages and APIs, is more skilled than anyone else where it counts. They have a good design, and they spend half their workday in meetings. The AIs use whatever APIs they like, which are often the latest.
Whatever you think of current hiring practices at large tech companies, I think it is fair to say that they are much more meritocratic than hiring practices at large law firms, for example. In most industries it is literally impossible for young, highly skilled professionals to shoot to the top of the pay scale based on past or tested performance. My sense is that the default for white-collar positions is more like a 0.1 correlation with ability, versus a 0.3 in computing.
That’s a claim, but is something meritocratic if it doesn’t test merit at all?
Leetcode has been Goodharted long past the point where what it is selecting for carries any signal.
Back when the questions were more reasonable and practice sites didn’t exist, you could actually be measuring “can someone invent an algorithm for this novel problem and code it up in an hour?” That’s real talent.
The question difficulty, even for a “medium,” is such that this is now impossible. The actual original solution, invented by a real genius of CS at some elite university decades ago, took them days to months to develop.
Anyone solving it in an hour, much less in the new 20-minute standard, is “cheating” through pattern recognition or direct knowledge of the solution.
If young lawyers were tested on Ace Attorney speedruns, would you consider that meritocratic?
I’ve done a LOT of interviewing for all levels of software engineering, at big and small companies, and it’s simply wrong to say it’s all leetcode-style ranking. Short-duration coding challenges are a big part of most interview processes, but they aren’t graded the way competitions are (at the better employers, at least). It’s not about the right answer; it’s about the explanation, the follow-up questions, and understanding of the algorithm and code. And perhaps a bit about the right answer and fluency of coding, but that’s more pass/fail than something requiring tons of practice. I routinely give hints to get the candidate on the right path to remember or figure out a working solution.
It’s disturbing just how many applicants get to an interview without having done even minimal practice on those sites (or without coding regularly in their previous job). The coding screen is a necessary part of the interview, just to weed out the non-starters. For more senior roles, the design discussions and deep dives into resume topics (“I see you’ve worked with distributed caching; tell me how you managed varying invalidation needs.”) are more useful for final decisions.
More importantly, interviewing and hiring are a pretty small part of a developer’s career impact. In-role impact is never 100% meritocratic either, but at a lot of places it’s pretty good.
I have no clue whether young attorneys are tested on actually knowing which citations to start with or how to prepare for a relevant case, but I kind of hope they are.
TL;DR leetcode-style interview coding is (or should be, if done well) satisficing, not ranking. Being competent at it is just as good (possibly better, if it lets you show other strengths) as being great at it.
“TL;DR leetcode-style interview coding is (or should be, if done well) satisficing, not ranking. Being competent at it is just as good (possibly better, if it lets you show other strengths) as being great at it.”
I would agree with this, and said basically the same thing upthread. It’s a Goodharted metric. Sure, if someone can do it at all, that’s one signal, but deciding between 10 candidates by going to “2 mediums, 40 minutes,” where only 1-2 will pass, is essentially arbitrary.
The 1-2 who passed may have been better at LC than those who failed, or may just have been lucky this round.
Say 6 candidates finished at least one medium and were partway through debugging the second. Your test doesn’t realistically distinguish among that set of 6.
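A quick simulation (entirely made-up numbers) illustrates the point: if 10 equally matched candidates each have, say, a 35% chance of fully finishing any one medium in the allotted time, then on average only about 1.2 of them pass a “finish both mediums” round, and which 1-2 pass is pure luck:

```python
import random

def run_round(n_candidates: int = 10, p_solve: float = 0.35, rng=None) -> int:
    """Count how many equally skilled candidates pass a
    'solve 2 mediums in 40 minutes' round, where each medium
    is independently finished with probability p_solve."""
    rng = rng or random.Random()
    return sum(
        1 for _ in range(n_candidates)
        if rng.random() < p_solve and rng.random() < p_solve
    )

rng = random.Random(0)
rounds = [run_round(rng=rng) for _ in range(2000)]
avg = sum(rounds) / len(rounds)
# Expected passes per round: 10 * 0.35**2 = 1.225, even though
# every candidate in the model is identical.
print(round(avg, 2))
```

With identical candidates the round still produces “winners,” which is the sense in which the cutoff is arbitrary rather than discriminating.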
“Leetcode has been Goodharted long past the point where what it is selecting for has any signal.”
Yes, leetcoding is essentially unrelated to actual software engineering, but isn’t one’s leetcoding ability also an indirect test of intelligence + conscientiousness? I doubt it became the entire industry’s standard only because it’s fashionable.
Intelligence: there may be a small amount of correlation.
Conscientiousness: no. LC is competitive programming, a sport where you memorize from a finite set of patterns and practice sloppy coding habits to minimize typing.
Someone did a survey of users’ IQs and Codeforces ratings, and the results showed that IQ might have a slight positive correlation with competitive programming ability.
So that is consistent with “a small amount of correlation.” The biggest correlate is competitive programming practice.
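The shape of that claim is easy to illustrate on synthetic data. In the toy model below (all numbers invented, not from the actual survey), rating is driven mostly by practice hours and only weakly by IQ, and the resulting Pearson correlations come out small for IQ and large for practice:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
# Hypothetical model: rating = mostly practice, weakly IQ, plus noise.
iq = [rng.gauss(100, 15) for _ in range(5000)]
practice = [rng.gauss(500, 200) for _ in range(5000)]
rating = [2.0 * p + 3.0 * q + rng.gauss(0, 400)
          for p, q in zip(practice, iq)]

print(round(pearson(iq, rating), 2))        # small (~0.08 under this model)
print(round(pearson(practice, rating), 2))  # large (~0.7 under this model)
```

A small positive IQ correlation and a dominant practice correlation can coexist, which is all the survey result shows.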
So you now have to maintain two skillsets: one for the job, and a second one to remain employable.
And then for each offer you spend many hours in interviews “battling,” since even if you are an expert you can get unlucky, or get ghosted after passing for unclear reasons that may be illegal discrimination you can’t prove.
It’s dumb.
But yeah, maybe not as bad as the lawyer hiring process, where it’s basically the reputation of your school, word of mouth, and how good you are at golf or something.