While I find I have benefitted a great deal from reading posts on OB/LW, I also feel that, given the intellectual abilities of the people involved, the site does not function as an optimally effective way to acquire the art of rationality. I agree that the wiki is a good step in the right direction, but if one of the main goals of LW is to train people to think rationally, I think LW could do more to provide resources for allowing people to bootstrap themselves up from wherever they are to master levels of rationality.
So I ask: What are the optimal software, methods, educational tools, problem sets, etc. that the community could provide to help people notice and root out the biases operating in their thinking? The answer may lie in resources already extant, but I have a proposal.
Despite being a regular reader of OB/LW, I still feel like a novice at the art of rationality. I realize that contributing one’s ideas is an effective way to correct one’s thinking, but I often feel as though I have all these intellectual sticking points which could be rooted out quite efficiently—if only the proper tools were available. As far as my own learning methods go, assuming a realistic application of current technology, I would love something like the following:
An interactive (calibrated to respond to learner’s demonstrated level of ability—similar to the GRE) test with a set of 1000+ problems, wherein I could detect the biases operating in my thinking as I approach given questions and problems. Using such a technique, I believe I could train myself up to the point where I could more closely approximate what I remember Eliezer somewhere saying is going on when he approaches an argument: his brain is cycling through possible biases almost as automatically as it is controlling his autonomic nervous system.
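To make the “calibrated” part concrete, here is a minimal sketch (in Python, with an entirely hypothetical question bank and helper names) of one simple way a test could adapt to the learner: a staircase rule that asks a harder question after each correct answer and an easier one after each miss. A real adaptive test would use a more sophisticated item-response model, but the basic loop is the same idea.

import random

# Hypothetical question bank: each entry is (difficulty 1-10, prompt, correct answer).
QUESTION_BANK = [
    (1, "Is 'Linda is a bank teller and active in the feminist movement' "
        "more probable than 'Linda is a bank teller'? (y/n)", "n"),
    # ... many more questions, spread across the difficulty levels ...
]

def pick_question(bank, difficulty):
    """Pick a random question at (or nearest to) the requested difficulty."""
    nearest = min(bank, key=lambda q: abs(q[0] - difficulty))[0]
    return random.choice([q for q in bank if q[0] == nearest])

def run_session(bank, n_questions=20):
    """Simple 1-up/1-down staircase: harder after a hit, easier after a miss."""
    difficulty = 5  # start in the middle of the scale
    for _ in range(n_questions):
        level, prompt, answer = pick_question(bank, difficulty)
        reply = input(f"[level {level}] {prompt} ").strip().lower()
        if reply == answer:
            difficulty = min(10, difficulty + 1)
        else:
            print(f"  correct answer: {answer}")
            difficulty = max(1, difficulty - 1)
    print(f"Final difficulty level reached: {difficulty}")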
[In terms of convenience, an added bonus would be the ability to look at questions through one of the standard flashcard applications available on the iPhone (or other devices), so I could look at, say, a few (or a few dozen) questions whenever the urge struck me. I dream of such a tool someday even incorporating SuperMemo-type capabilities, wherein even experts can keep their knowledge fresh by having questions reappear on a schedule designed to stave off the long-term decay of memories. I am interested in helping to develop such a learning tool.]
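The SuperMemo-style scheduling mentioned above is not exotic, either: the published SM-2 algorithm fits in a few lines of Python. The quality grade runs from 0 (complete blackout) to 5 (perfect recall), and a per-item “easiness factor” stretches the interval between reviews of well-remembered material.

def sm2_update(quality, repetitions, interval_days, easiness):
    """One review step of the SM-2 spaced-repetition algorithm.

    quality: self-graded recall, 0 (blackout) to 5 (perfect).
    Returns the updated (repetitions, interval_days, easiness).
    """
    if quality >= 3:                     # successful recall
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * easiness)
        repetitions += 1
    else:                                # failed recall: relearn from scratch
        repetitions = 0
        interval_days = 1
    # Easiness is adjusted after every review but never drops below 1.3.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(1.3, easiness)
    return repetitions, interval_days, easiness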
I welcome any input about how to proceed with such a plan. Although I am a PhD candidate/adjunct professor, I don’t know what the optimal technology for such a project would be. It does seem, though, that the technical demands necessary to get such a project off the ground need not be imposing.
Once such a project got off the ground, I believe the community could come together to provide effective questions and answers. As I see it, it would neither be necessary nor desirable for such a project to be created by a single person.
I believe there are many people for whom this project could be valuable. We might find that, were such a tool to be implemented, at the very least it might raise the level of discourse on LW. Beyond that, who knows. Thanks for your suggestions.
I’m aware of one simple, pre-existing technology that uses spaced repetition (items you remember or “get correct” take longer to reappear, for optimal learning) and would be easy to use: flashcard programs. There are a number of free ones out there; two of the best that I found are Mnemosyne and Anki. I have been using Anki for about half a year now for learning vocabulary (largely for the GRE) and am very happy with it; I wish I had discovered such programs earlier.
While they’re pre-existing and easy to use (and make it easy to share and add “cards”), two imperfections stand out. First, I’m not aware of any functionality that lets you actually select an answer: you could look at the question and possible answers and then pick one mentally, but the “answer side” couldn’t be customized to the selection you made. Secondly, you of course wouldn’t be able to calibrate the questions to your level, since the program doesn’t know what you answered. You can choose to repeat an item very soon, or after short, medium, or long intervals (relative to how many times you’ve answered correctly, by your own scoring), but it’s not quite the same.
I might be wrong, and some existing flashcard program might allow for the selection of answers. More promisingly, Anki is open-source, so with only a bit of work we could perhaps build a quiz version.
I don’t understand. How do pseudo-multiple-choice cards do any worse than ‘genuine’ multiple-choice ones? And what do you mean by calibration? The calibration is done by ease of remembering, same as any card. Nothing stops you from saying (for Mnemosyne), ‘if I get within 5% of the correct value, I’ll set this at 5; within 10%, 4; within 20%, 3; etc.’
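That self-grading rule for numeric answers is easy to mechanize; a toy helper (not part of Mnemosyne, just the scheme above written out) might be:

def error_to_grade(guess, correct):
    """Map relative error on a numeric answer to a Mnemosyne-style 0-5 grade."""
    if correct == 0:
        return 5 if guess == 0 else 0
    error = abs(guess - correct) / abs(correct)
    if error <= 0.05:
        return 5
    if error <= 0.10:
        return 4
    if error <= 0.20:
        return 3
    return 1  # more than 20% off: treat it as a miss so the card comes back soon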
It might well work to one’s satisfaction. What I meant by calibration is that it wouldn’t be able to give you a different next question based on what you answered; whether you get this question right or wrong, the series of questions awaiting you afterwards is exactly the same (unlike the GRE, for example). And if the answer were numeric you could use an algorithm for when to repeat the card, yes. Offhand, I had imagined questions with discrete answers that bear little distance relation to one another.
OK… from the sound of it, it isn’t really a good application for SRS systems. (They’re focused on data, not skills—and thinking through biased problems would seem to be a skill, and something where you don’t want to memorize the answer!)
However, probably one could still do something. Mnemosyne 2.0 is slated to have extensible card types; I’ve proposed a card type which will be an arbitrary piece of Python code (since it’s running in an interpreter anyway) outputting a question and an answer.
My example was generating random questions to learn multiplication (in pseudocode, the card would look like ‘x = getRandom(); y = getRandom(); question = print x “×” y; answer = print x*y;’), but I think you could also write code that generated biased questions. For example, one could test basic probability by generating 3 random choices: ‘Susan is a lawyer’, ‘Susan is a lawyer but not a sledge-racer’, ‘Susan is a lawyer from Indiana’, and seeing whether the user falls prey to the conjunction fallacy.
(As a single card, it would get pushed out to the future by the SRS algorithm pretty quickly; but to get around this you could just create 5 or 10 such cards.)
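As a concrete rendering of the pseudocode above, a generated card could be as simple as a function returning a (question, answer) pair. The sketch below assumes such a hypothetical card type and adapts the ‘Susan’ example (one of the conjunctions is invented for illustration); the correct answer is always the non-conjunctive statement.

import random

def generate_multiplication_card():
    """Random multiplication drill: returns (question, answer) as strings."""
    x, y = random.randint(2, 12), random.randint(2, 12)
    return f"{x} × {y} = ?", str(x * y)

def generate_conjunction_card():
    """Conjunction-fallacy probe: a plain statement is always at least as
    probable as any conjunction built from it."""
    base = "Susan is a lawyer"
    extras = ["from Indiana", "but not a sledge-racer", "and a keen chess player"]
    conjunction = f"{base} {random.choice(extras)}"
    options = [base, conjunction]
    random.shuffle(options)
    question = ("Which is more probable?\n"
                f"  (a) {options[0]}\n"
                f"  (b) {options[1]}")
    answer = "(a)" if options[0] == base else "(b)"
    return question, answer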
Worth a top level post.
I think this is a great idea.