Mostly I’ve been poking at a post tentatively titled “A dry introduction to the empirical evidence on DI’s effectiveness”, essentially a summary of “Research on Direct Instruction”. I’ve been feeling like the best thing to do might be to take a step back and explain why people should be interested in the theory before explaining the theory itself. (Yes? No? Maybe?)
Personally, I’m eager to actually use DI more in my own learning, so I’m currently working through Theory of Instruction. But better evidence than PFT would be nice, yes; especially evidence that isn’t all about basic skills. (Because otherwise, no matter how good the technique, I won’t benefit from it.)
Interestingly enough, the study with the highest effect size in the meta-analysis (2.44) involved non-basic skills. Actually I think I’ll just type up the summary:
This study analyzes the use of the Earth Science videodisc program with elementary education majors who traditionally have had negative attitudes towards science teaching. One group received the DI program and the other group received the traditional approach [random assignment, of course] during a one-semester science course. The DI group had significantly higher posttest knowledge scores (91%) and higher confidence in their understanding of science knowledge and ability to teach science.
Cited as:
“Vitale, M., & Romance, N. (1992). Using videodisc instruction in an elementary science methods course: Remediating science knowledge deficiencies and facilitating science teaching. Journal of Research in Science Teaching, 29, 915–928.”
Not that I’ve dug up the original paper myself yet.
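For a sense of scale on that 2.44 (a sketch only; I’m assuming the meta-analysis reports effect sizes as standardized mean differences, i.e., Cohen’s d, which is the usual convention):

$$d = \frac{\bar{x}_{\mathrm{DI}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}}$$

Under Cohen’s rule of thumb, 0.8 already counts as a “large” effect, so 2.44 would put the average student in the DI condition roughly two and a half pooled standard deviations above the control-group mean.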
But one of my favorites was a study that didn’t use random assignment; instead, it compared the performance of two groups of high school students: AP kids (doing whatever they normally do to study) and kids whose previous performance was in the lower two quartiles (taught through a videodisc course on “Chemistry and energy”). Both groups then took the same test.
The results, as a researcher informally reported outside the study: “The experimentals whumped the AP students on all topics related to what was covered by the videodiscs of our course.”
(This one wasn’t included in the meta-analysis, so I’ll have to try to dig up the reference later.)