Testing and credentialism are a mess. The basic problem is that it’s unclear what the result should measure: how much the student knows, how much the student has learned, how intelligent the student is, how conscientious they are, or how well their capabilities line up with the topic. The secondary problem is that in most settings the test is expected to be both hard to game AND perfectly objective, so that there is no argument about the correctness of answers (and so that grading can be done quickly).
I spend a lot of time interviewing and training interviewers for tech jobs. This doesn’t have the first problem: we have a clear goal (determine whether the candidate is likely to perform well in the role), usually tested by having them solve problems similar to those they would face in the role. The second difficulty does apply: a good interview generates actual evidence of the candidate’s likely success, not just domain knowledge. That takes a lot of interviewing skill to get the best from the candidate, and a lot of judgement in how to evaluate the approach and weigh the various aspects tested. We put substantial time into this, and we accept the judgement aspect rather than trying to reduce the time spent, automate the results, or be purely objective in assessment.