Project Follow Through, the study most frequently cited as proving the benefits of Direct Instruction, is far from perfect. Neither classrooms nor schools were randomly assigned to curricula. It's not clear how students ended up in treatment vs. comparison groups, and it probably happened differently in different communities. See http://en.wikipedia.org/wiki/Project_Follow_Through#Analytical_methods for more information and further references.
Yes, Project Follow-Through had some problems, but I don't think it's likely that those problems introduced a systematic bias towards DI large enough to explain away the huge differences, especially since similar results were replicated in many smaller studies where better random assignment was possible.
“Research on Direct Instruction” (Adams and Engelmann, 1996) goes into much better detail on Follow-Through and those other experiments.
Actually, it basically covers three different types of studies:
Those dealing with the relative effectiveness of DI compared to other models (in a meta-analysis)
Those pinning down the internal details of DI theory, validating unique predictions it makes (about the effects that specific variations in sequencing, juxtaposition, wording, pacing, etc. should have on student performance). Only one prediction ever came out differently than expected: that a sequence of examples starting with negatives would be more efficient at narrowing in on a concept for the learner. It was found that while this did hold with more sophisticated older learners, more naive younger students simply interpreted 'This is not [whatever]' to mean 'This is not important, so don't attend to this'.
Those demonstrating 'non-normative' outcomes, for instance, calling Piagetian developmental theory into question.
You should be able to find the book at a local university library. Could you get your hands on it? I’d love to hear what you think after reading it!