If students could always get away with an “I don’t know” they wouldn’t have much incentive to learn anything.
More importantly, the school system’s main purpose is not just to teach you a collection of facts. It has to teach you how to behave in the world, where you often have to make choices based on incomplete information.
Students who do not care about education do get away with not knowing anything. Detention is not much of a punishment when you don’t show up.
It is difficult to prevent a student who cares deeply about education from admitting ignorance, since admitting ignorance is necessary in asking for explanations. The difficult task is persuading students who care about doing well to seek knowledge rather than good marks. These students are not motivated enough to learn of their own accord: they never volunteer answers or ask questions openly, because they care more about not being thought ignorant (or, of course, keen) than about not being ignorant.
The point is not to allow students to “get away with” admitting ignorance. There is a vast difference between not knowing the answer and not wanting to know. Personally, I have never found it hard to tell the difference between students who don’t want to know and students who don’t want to be judged by their peers.
It is very rarely a bad idea to publicly admit that you might be wrong, especially when you are guessing. A school that does not teach the importance of separating your beliefs from your ego has failed miserably. Whatever else it has taught, it has not taught its students how to learn.
0 marks for “I don’t know”. 1 mark for a correct answer. −1 mark for an incorrect answer.
Not only is it a simple incentive system; I’ve sat exams that implemented similar systems (the Westpac math competition, for example).
That is a sensible scoring system which is in fact widely used.
This is, in fact, close to being the worst system ever devised. The fact that something is widely used does not mean that it is any good. Examining the results of this kind of system shows that, when applied to unfamiliar material, it consistently gives the best marks to the worst students. If the best students can’t do every problem with extreme ease, they tend to venture answers where poor students do not. This drags the best students down towards the median score and hands the highest scores to poor students who were lucky. Applying the system to familiar material should produce a similar, though less pronounced, effect. Adding penalties lowers the dispersion about the mean, which always makes an exam less useful.
Exam systems that have no penalty for wrong answers are better than ones that do, but are still imperfect. The only reliable way to gauge students’ ability is to have far more questions (preferably spread across several papers), both to reduce the effect of mistakes relative to ignorance and to increase the number of areas examined. This is generally cost-prohibitive. It also tests students’ ability to answer exam questions, rather than testing their understanding. There is, fortunately, a way to test understanding: a student understands material when they can rediscover the ideas that draw on it.
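As a rough statistical justification for the longer exam: if each question contributes an independent, noisy measurement of ability, the luck component of the average score shrinks like 1/√n with the number of questions n, so quadrupling the length roughly halves the noise.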
Not really: it teaches calibration as well as correctness. Are you more than 50% sure? No? Then don’t guess.
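To spell out the arithmetic behind that rule: a guess that is right with subjective probability p scores p(+1) + (1 − p)(−1) = 2p − 1 in expectation, against 0 for abstaining, so guessing pays exactly when p > 1⁄2.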
In fact, it shares several properties with the best system ever devised (for multiple choice questions, at least): the test-taker assigns a probability to each of the answers (and the total probability doled out must sum to one), and is graded based on the logarithm of the probability they assigned to the correct answer. (Typically, there’s an offset so that assigning equal probability to all possibilities gives a score of 0, so that it is possible to get positive points.)
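A minimal sketch of that rule in Python (the function name is mine, and the choice of natural log is arbitrary, since any base only rescales the scores):

```python
import math

# Log scoring for one multiple-choice question: the score is the log of the
# probability assigned to the correct answer, offset by log(n) so that a
# uniform guess over the n options scores exactly 0.
def log_score(probs, correct_index):
    assert abs(sum(probs) - 1.0) < 1e-9, "assigned probabilities must sum to one"
    p = probs[correct_index]
    if p == 0.0:
        return float("-inf")  # full confidence in a wrong answer is punished without bound
    return math.log(p) + math.log(len(probs))

print(log_score([0.25, 0.25, 0.25, 0.25], 2))  # uniform guess -> 0.0
print(log_score([0.70, 0.10, 0.10, 0.10], 0))  # confident and right -> ~1.03
```

The unbounded penalty for putting probability 0 on the true answer is what makes overconfidence so costly under this rule.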
Do you have linkable results? My experience with probability log-scoring is that, even on the first test, the median score is somewhat better than 0 and there are several negative scorers, but the test-takers who receive the best marks (those who are both high-accuracy and high-calibration) stand noticeably apart from the pack, and they are hardly the worst students.
The worst marks often go to students whose accuracy is high but whose calibration is low; that effect goes away once they learn calibration, which seems like a feature, not a bug.
How can poor students get lucky if they don’t venture answers to questions where they are not sure?
The trouble with this approach is that you then are also grading speed and resistance to mental fatigue. In some cases, that is desirable; in others, not.
Allow both an answer and a certainty.
-x points for an incorrect answer with certainty x
+2x points for the correct answer with certainty x
Alternatively, +10^x points for a correct answer with certainty x, and +log(1−x) points for an incorrect answer. This encourages an attempt at every question, even when the certainty is rated as 0.
Yes, I know, old post.
If you give the student −X points for an incorrect answer with certainty X, and +2X points for a correct answer with certainty X, the expected value of giving an answer and lying about its certainty as Y is (1−X)(−Y) + (X)(2Y) = 3XY − Y. If X is less than 1⁄3, the student should lie and claim that his certainty is 0, while if X is greater than 1⁄3, he should lie and claim that his certainty is 1.
I’m not going to try to find the maximum for the second version, but it should be obvious that the student is still better off lying about his true certainty. Of course, you could just avoid telling the student how you’re going to grade, but the score will then just depend on how well the student guesses your grading criteria.
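A quick numeric check of both claims (a sketch in Python; reading the second rule’s “Log” as base 10 is my assumption):

```python
import math

# First rule: +2Y for a right answer, -Y for a wrong one. The expected score at
# true probability X and claimed certainty Y is (1-X)(-Y) + X(2Y) = Y(3X - 1),
# which is linear in Y, so the optimum is always at an extreme.
def ev_linear(X, Y):
    return (1 - X) * (-Y) + X * (2 * Y)

# Second rule: +10^Y for a right answer, log10(1-Y) for a wrong one.
def ev_exp(X, Y):
    return X * 10 ** Y + (1 - X) * math.log10(1 - Y)

claims = [i / 1000 for i in range(1000)]  # candidate claimed certainties in [0, 1)
for X in (0.2, 0.5, 0.8):
    print(X,
          max(claims, key=lambda Y: ev_linear(X, Y)),  # jumps to 0 or ~1
          max(claims, key=lambda Y: ev_exp(X, Y)))     # lands far from the honest X
```

Under both rules the score-maximizing claim sits well away from the student’s true certainty, which is the sense in which they reward lying.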
Neither of my described systems is ideal. Squared error works for binary questions, but it would reward “Pi is exactly 3, with 0 confidence”.
Rather than allow continuous estimates of accuracy, I think the ideal system would ask the student for a range of confidence (five choices from “guessing” to “certain”, with equivalent probabilities) and use an appropriate scoring rule: a guess would be penalized 0 for being wrong but gain little for being right, while going from “almost certain” to “certain” would add a small value to a correct answer but a large penalty to a wrong answer.
Having established the reward for a correct answer and the penalty for a wrong one for each confidence description, do the math to determine what the actual ranges of confidence are, sanity-check them against the descriptions, and then tell the student the confidence intervals. (Alternatively, pick the intervals and terms first and do the math to figure out the reward and penalty for those intervals.)
It’s hard to come up with a system where the student doesn’t benefit from lying about his certainty. What you describe would fix the case from 4 (almost certain) to 5 (certain), but you need to get all the cases to work and it’s plausible that fixing the 4 to 5 case (and, in general, increasing the incentive to pick 4) breaks the 3 to 4 case.
After all, you can’t have all the transitions between certainty values add a small value to a correct answer. You must have a transition where a large value is added for a correct answer and your system may break down around such transitions.
The largest value would be added for the first confidence interval, which would also carry the smallest cost for being wrong with that confidence.
That would mean a large value would be added when going from “guess” to “almost guess”, which would mean that it would be beneficial for a student to lie and claim to almost guess when he’s really completely guessing.
Suppose the student thinks that there is a 10% chance that he is right, and the reward structure is +5/−1 for confidence interval 1. His expected score from claiming that confidence is 0.1(+5) + 0.9(−1) = −0.4, so the large reward for a correct answer does not actually tempt him to overstate a pure guess.
In fact, make the reward structure (right/wrong): 1/0, 6/−1, 10/−3, 13/−6, 15/−10, 16/−15.
That puts the breakpoints at roughly even intervals, keeps the math easy, and, with a little clarification of exactly where the breakpoints are, doesn’t reward someone who accurately determines their accuracy and then lies about it.
I sat down late last night trying to prove that this couldn’t work and instead proved that it could. If I did this correctly, then for it to work, with the confidences increasing from 0 to 1, each boundary between adjacent tiers must satisfy

left-side confidence ≤ ΔY / (ΔX + ΔY)
right-side confidence ≥ ΔY / (ΔX + ΔY),

where ΔX is the difference between adjacent tiers’ rewards for a correct answer and ΔY the difference between their penalties for a wrong one. The differences in X are 5, 4, 3, 2, 1 and the differences in Y are 1, 2, 3, 4, 5, giving breakpoints of 1⁄6 through 5⁄6; since 0 < 1⁄6 < 1⁄5 < 2⁄6 < 2⁄5 < 3⁄6 < 3⁄5 < 4⁄6 < 4⁄5 < 5⁄6 < 1, this is immune to lying within a single interval (and turns out to be immune across multiple intervals as well).
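A small brute-force check of that result (a sketch; the tier list and the honest-tier convention are my reading of the proposal above):

```python
# Tiered scores (reward if right, score if wrong) from the proposal above.
TIERS = [(1, 0), (6, -1), (10, -3), (13, -6), (15, -10), (16, -15)]

def ev(p, tier):
    right, wrong = TIERS[tier]
    return p * right + (1 - p) * wrong

# Breakpoints between adjacent tiers: p = dY / (dX + dY), where dX is the
# increase in the reward and dY the increase in the penalty magnitude.
breaks = [(w0 - w1) / ((r1 - r0) + (w0 - w1))
          for (r0, w0), (r1, w1) in zip(TIERS, TIERS[1:])]
print(breaks)  # [1/6, 2/6, 3/6, 4/6, 5/6]

# At every confidence p, the honest tier should do at least as well as any lie.
for i in range(601):
    p = i / 600
    honest = sum(p > b for b in breaks)  # index of the interval containing p
    assert all(ev(p, honest) >= ev(p, t) - 1e-12 for t in range(len(TIERS)))
print("honest reporting is never beaten")
```

The assertion holding across the grid matches the algebra: the breakpoints increase, so each tier is optimal exactly on its own confidence interval.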
So, what are the downsides of making this a grading standard? The biggest one I see is that it would be unfair except in classes that have, as a prerequisite, an outstanding score in a class covering credence calibration.