Many of our public policies are proposed by experts who are comfortable only with correlations that can be measured, appropriated and quantified, and ignore everything else.
I would love to live in this alternate reality where “our public policies” are driven by dispassionate experts who actually pay attention to real-world data.
I think it’s quite a complex issue. Obviously politics has a raw emotional component to it, and other non-rational components too, come to that. But overly fixating on things that we can easily target, test and report on has its risks. You can end up privileging the results of a particular measure or test simply because you can get lots of data on it.
This then produces problems such as action aimed at the artificial target (waiting lists, SAT scores) rather than the underlying issue. The solution could well be better, more nuanced targets, of course. You could avoid gaming the incentives by simply not telling people what the metric is, but you’d get a lot of accusations of unfairness, and the method would be open to being changed after the fact to bend the results. It would also fail in the face of the drive for transparency, at least in the UK.
Maybe the policies that end up looking over-quantified and over-rationalised to some, and arbitrary and irrational to others, are the ones where the ‘making sure there’s an evidence base’ work is done by people who already know what conclusion they want, or even simply AFTER the ‘making the policy’ work. If that happens, it takes a lot of intellectual honesty (and acumen) to produce anything other than a cargo cult of real evidence. Real evidence comes in many flavours, but if you want to make something merely look like evidence, spreadsheets and statistics help a lot.