Averages are more common than individualized approaches because they’re easy, and often good enough. Individualizing education according to Rose’s proposals is an attractive vision, and seems attainable, but it also represents a costly investment. I expect that in many cases there is a real benefit to improving the granularity of our data and using it to make better decisions. The question, in each case, is whether that benefit is worth the cost, and how to execute the change successfully.
Fortunately, we don’t have to go all the way to individualizing everything in order to reap benefits. Averaging is just one way of simplifying complex data, and much more sophisticated statistical techniques are readily available: we can identify clusters, run principal component analysis, and so on. If we want a more realistic “average,” we can think in terms of a “medoid”: the actual data point in a cluster with the smallest total distance to every other point, a real observation rather than a composite that may describe no one.
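To make the medoid idea concrete, here is a minimal sketch in Python (using NumPy, with a made-up toy cluster for illustration) of how a medoid differs from a mean: the mean can be a point no individual resembles, while the medoid is always one of the actual observations.

```python
import numpy as np

def medoid(points: np.ndarray) -> np.ndarray:
    """Return the medoid of a cluster: the actual data point with the
    smallest total distance to every other point in the cluster."""
    # Pairwise Euclidean distances between all points in the cluster.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # The medoid minimizes the summed distance to the rest of the cluster.
    return points[dists.sum(axis=1).argmin()]

# Toy example: the mean of this cluster is not a real observation,
# but the medoid always is.
cluster = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0]])
print(cluster.mean(axis=0))  # [2.4 2.3] -- describes no actual point
print(medoid(cluster))       # [1.2 0.9] -- a real data point
```

The same idea scales up: instead of designing for a statistical composite, you design for the real case that best represents its cluster.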
My guess is that bringing more sophisticated statistical thinking to real-world problems is partly a coordination problem. Non-statisticians don’t really know how to think statistically, and may simply not be able to afford a specialist; yet they regularly have to make decisions based on statistics. Statisticians, in turn, may not have the power or the domain expertise to translate their statistical insights into a cost/benefit analysis and implement a plan of action. Unfortunately, the result is that important decisions are made by people with minimal statistical training, who default to averages and draw faulty conclusions from correlations. It’s “folk statistics,” and it’s everywhere.