If you’re after feedback-for-understanding, providing a student with a list of questions they got wrong and a good solutions manual (which you only have to write once) works most of the time (my guess is around 90% of the time, but I have low confidence in my estimates because I’m capable of successfully working through entire textbooks’ worth of material without needing human feedback, which I’m told is not often the case). Doing this should be more effective than having the error explained outright, à la the generation effect (people retain material better when they generate it themselves than when it’s simply told to them).
Another interesting result is that the best feedback for fostering understanding often comes not from experts, who have such a deep degree of understanding and automaticity that it impairs their ability to simulate and communicate with minds struggling with new material, but from students who just learned the material. There’s a risk that students who believe the right thing for the wrong reason will propagate their misunderstanding, but I think that pairing up a student who’s struggling with some concept (i.e., throwing a solutions manual at them hasn’t helped them bridge the conceptual gap that caused them to get the question wrong) with a student who understands it is often helpful. IIRC, Sal Khan described using this technique with some success in his book; a friend/mentor who teaches secondary math and keeps up with the literature tells me it works; and I’ve used this basic technique while running an after-school enrichment program for the local Mathcounts team after the season had ended, and I can only describe its efficacy as “definitely witchcraft”.
I think there’s a place for graders to give detailed feedback on bad answers, but most of the time, it’s better to force students to do the work themselves and locate their own errors/conceptual gaps, and in most of the remaining cases, to pawn the responsibility off onto other students (this could be construed as teachers being lazy, but it’s also what, to my knowledge, produces the best learning outcomes). Since detailed feedback is only desirable after two rounds of other approaches that (in my deeply nonrepresentative experience) usually work, I don’t think it makes sense to produce detailed feedback for every wrong answer.
Then again, I don’t fully understand what context you have in mind. In my original post, I was thinking about purely diagnostic math tests given to postsecondary students for the benefit of employers, tests that wouldn’t so much as tell students which questions they got wrong, along the lines of the Royal Statistical Society’s Graduate Diploma (five three-hour exams that grant a credential equivalent to a “good UK honours degree”). In writing this, I’m mostly imagining standardized math tests for secondary students in America (which, I’m given to understand, already have written components); these currently don’t give per-question feedback, but changing that is much less of a pipe dream than creating tests that effectively test understanding. Come to think of it, I think the above approach applies even better to classroom instructors giving their own tests, at either the secondary or postsecondary level.
Tangentially related: the best professor I ever had would type 3–4 pages of general commentary for the class after every problem set and test (common errors, why they were wrong, and how to approach the problems better, as well as things the class did well), generally by the next class. I found this commentary extraordinarily helpful, not just because of the feedback itself, but because (a) it helped dispel the misperception that everyone else understood everything and I was struggling because I was stupid, (b) it taught us to discriminate between bad, mediocre, and good work, and (c) comments like “most of you did [x], which was suboptimal because of [y], but one of you did [z], which takes a bit more work but is a better approach because [~y]” really drove me not to do the minimum amount of work to get an answer when I could do a bit more work to get a stronger solution. (The course was in numerical methods, so, as an example, we once had a problem where we had to use some technique in which error explodes (I’ve now forgotten which, since I didn’t have Anki back then) to locate a typo in some numeric data. A sufficient answer would have been to identify the incorrect entry; a stronger answer was to identify the incorrect entry, figure out the error (two digits typed in the wrong order), and demonstrate that fixing it made the explosion go away.)
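For the curious, here’s a minimal sketch (in Python, with invented data) of one classic technique of that flavor: a finite-difference table. I can’t promise it’s the one we used, but it has the right shape: an isolated typo in otherwise-smooth tabulated data gets amplified by binomial coefficients in the higher-order differences, and the resulting alternating-sign spike centers on the bad entry.

```python
# Hypothetical illustration (not the actual course data): using a
# difference table to locate an isolated typo in smooth tabulated data.
# An error e in one entry shows up in the 4th differences as
# e * [1, -4, 6, -4, 1], a spike that is hard to miss against smooth data.

values = [x**3 + 2 * x for x in range(12)]  # clean table of y = x^3 + 2x
values[7] = 375  # typo: the true value is 357, with two digits transposed

def nth_differences(vals, order):
    """Apply the forward-difference operator `order` times."""
    for _ in range(order):
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return vals

d4 = nth_differences(values, 4)
print(d4)  # a cubic's 4th differences are identically zero, so this is
           # pure error: [0, 0, 0, 18, -72, 108, -72, 18]

# The peak of the spike (the 6e term) sits two positions before the bad
# entry, so we can recover the typo's index directly.
suspect = max(range(len(d4)), key=lambda j: abs(d4[j])) + 2
print(f"suspect entry: index {suspect}, value {values[suspect]}")  # index 7

values[suspect] = 357  # fix the transposition...
assert all(d == 0 for d in nth_differences(values, 4))  # ...and the spike vanishes
```

On real (noisy, non-polynomial) data the clean zeros become merely small numbers, but the alternating binomial pattern still stands out, which is what makes this usable for typo-hunting.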
Another interesting result is that the best feedback for fostering understanding often comes not from experts, who have such a deep degree of understanding and automaticity that it impairs their ability to simulate and communicate with minds struggling with new material, but from students who just learned the material.
The core material for teaching is not the subject to be taught, but human confusions about that subject.
providing a student with a list of questions they got wrong and a good solutions manual … works most of the time … I’m capable of successfully working through entire textbooks’ worth of material without needing human feedback, which I’m told is not often the case
That’s a very important point. My impression is that people can be divided into two general categories: those who learn best by themselves, and those who learn best when taught by someone else.
I suspect that most people on LW prefer to inhale textbooks on their own. I also suspect that most people outside of LW prefer to have a teacher guide them.
You pay them. You also tell them that their job is to identify good answers, not to give detailed feedback to bad answers.
If your goal is to foster understanding instead of giving canned answers, this seems counterproductive.
Fair point—I’d spaced out on this being for a class rather than an employer looking for clueful people.