I’m aware of one simple, pre-existing technology that uses spaced repetition (items you remember, or “get correct”, take longer to reappear, for optimal learning) and would be easy to use: flash-card programs. There are a number of free ones out there; two of the best I found are Mnemosyne and Anki. I have been using Anki for about half a year now to learn vocabulary (largely for the GRE) and am very happy with it; I wish I had discovered such programs earlier.
While they’re pre-existing and easy to use (and make it easy to share and add “cards”), two imperfections stand out. First, I’m not aware of any functionality that lets you actually select an answer: you can look at the question and the possible answers and pick one mentally, but the “answer side” can’t be customized to the selection you made. Second, the program of course can’t calibrate the questions to your level, since it doesn’t know what you answered. You can choose to repeat an item very soon, or after short, medium, or long intervals (relative to how many times you’ve answered correctly, by your own scoring), but it’s not quite the same.
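For the curious, the scheduling idea behind both programs is roughly SM-2, the SuperMemo algorithm they descend from. A simplified sketch of how an interval grows with each self-graded review (not the exact code either program uses):

    # Simplified SM-2-style scheduling; details differ from what
    # Mnemosyne or Anki actually do.
    def next_interval(interval_days, ease, grade):
        """grade is the 0-5 self-assessment; returns (new_interval_days, new_ease)."""
        if grade < 3:                      # forgotten: show it again tomorrow
            return 1, ease
        # remembered: nudge the ease factor, then stretch the interval
        ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
        if interval_days == 0:
            return 1, ease
        if interval_days == 1:
            return 6, ease
        return round(interval_days * ease), ease

    # e.g. a card at a 6-day interval with ease 2.5, graded 4, comes back in 15 days:
    # next_interval(6, 2.5, 4)  ->  (15, 2.5)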
I might be wrong, and some existing flash-card program might allow for the selection of answers. Or, perhaps more promisingly, Anki is open-source, so with only a bit of work we could build a quiz version.
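Roughly, what I have in mind is something like the following; this is purely hypothetical, not an existing Anki feature, and the function and names are just for illustration:

    import random

    # Hypothetical "quiz card": lets you actually pick an answer, tailors the
    # feedback to your pick, and grades the card for you instead of relying
    # on self-scoring.
    def ask(question, correct, distractors):
        choices = [correct] + list(distractors)
        random.shuffle(choices)
        print(question)
        for i, choice in enumerate(choices, 1):
            print(f"  {i}. {choice}")
        pick = choices[int(input("Your choice: ")) - 1]
        if pick == correct:
            print("Correct.")
            return 5          # a pass: the scheduler can push the card far out
        print(f"You picked {pick!r}; the answer is {correct!r}.")
        return 1              # a miss: repeat it soon

    # e.g.: ask("'Lachrymose' most nearly means:", "tearful",
    #           ["verbose", "sluggish", "apathetic"])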
I don’t understand. How do pseudo-multiple-choice cards do any worse than ‘genuine’ multiple choice? And what do you mean by calibration? The calibration is done by ease of remembering, the same as for any card. Nothing stops you from saying (for Mnemosyne), ‘if I get within 5% of the correct value, I’ll grade this a 5; within 10%, a 4; within 20%, a 3; and so on.’
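Concretely, that rule might look like this (the cutoffs are just the ones above, and the 0-5 scale is Mnemosyne’s grading scale; purely illustrative):

    # Turn a numeric guess into a 0-5 self-grade using the cutoffs above.
    # Assumes the correct value is nonzero.
    def self_grade(guess, correct):
        error = abs(guess - correct) / abs(correct)   # relative error
        if error <= 0.05:
            return 5
        if error <= 0.10:
            return 4
        if error <= 0.20:
            return 3
        return 1      # anything further off counts as a miss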
It might well work to one’s satisfaction. What I meant by calibration is that it wouldn’t be able to give you a different next question based on what you answered; whether you get this question right or wrong, the series of questions awaiting you afterwards is exactly the same (unlike the GRE, for example). And if the answer were numeric, you could use an algorithm like that to decide when to repeat the card, yes. Offhand I had imagined questions with discrete answers that have no natural notion of distance between them.
OK… from the sound of it, it isn’t really a good application for SRS systems. (They’re focused on data, not skills—and thinking through biased problems would seem to be a skill, and something where you don’t want to memorize the answer!)
However, one could probably still do something. Mnemosyne 2.0 is slated to have extensible card types; I’ve proposed a card type that would be an arbitrary piece of Python code (since Mnemosyne is running in a Python interpreter anyway) outputting a question and an answer.
My example was generating random questions to learn multiplication (in pseudocode, the card would look like ‘x = getRandom(); y = getRandom(); question = print x “×” y; answer = print x*y;’), but I think you could also write code that generated bias-testing questions. For example, one could test basic probability by generating 3 random choices: ‘Susan is a lawyer’, ‘Susan is a lawyer but not a sledge-racer’, ‘Susan is a lawyer from Indiana’, and seeing whether the user falls prey to the conjunction fallacy.
(As a single card, it would get pushed out to the future by the SRS algorithm pretty quickly; but to get around this you could just create 5 or 10 such cards.)
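To make the pseudocode concrete, such a code-card might look something like this; the generator-function interface is my own guess, not Mnemosyne’s actual plugin API:

    import random

    # Hypothetical "code card": every review calls a generator to produce a
    # fresh question/answer pair.
    def multiplication_card():
        x, y = random.randint(2, 12), random.randint(2, 12)
        return f"{x} × {y} = ?", str(x * y)

    def conjunction_card():
        # The qualified statements can never be more probable than the bare
        # one, so "Susan is a lawyer" is always the right pick.
        statements = [
            "Susan is a lawyer",
            "Susan is a lawyer but not a sledge-racer",
            "Susan is a lawyer from Indiana",
        ]
        random.shuffle(statements)
        question = "Which is most probable?\n" + "\n".join(
            f"{i}. {s}" for i, s in enumerate(statements, 1))
        answer = f"{statements.index('Susan is a lawyer') + 1}. Susan is a lawyer"
        return question, answer

    # quick demo
    if __name__ == "__main__":
        for card in (multiplication_card, conjunction_card):
            q, a = card()
            print(q, "->", a)

Five or ten variants of the second generator (different names, professions, qualifiers) would also address the single-card scheduling problem mentioned above.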
I think this is a great idea.