I think the areas least open (though still not immune) to mind-killing are:
1) better, more consistent evidence for policies (good stats rather than govts commissioning policy-based evidence)
2) developing technical systems so they work better: the more techy the better. Making computer systems for processing pensions, tax or whatever that come in on budget and on spec would be a fantastic start. Though I guess even then, a libertarian might feel that giving the state more powerful and effective systems is counter-productive.
Although when the stakes are sufficiently high (as they often are in politics) this tends to degenerate into people finding ways to bribe, intimidate, or otherwise manipulate whoever is gathering and/or analyzing said evidence.
Well, everything has risks. But you can generally tell when people are doing that. And it’s harder if the evidence is systematic rather than post-hoc reviews of specific things.
Really, this is much harder than you seem to think.
I’m not sure exactly what you’re referring to, so it’s hard to respond. I think most of the damage done to evidence-gathering is done in fairly open ways: the organisation explains what it’s doing even while it’s selecting a dodgy method of analysis. At least that way you can debate about the quality of the evidence.
There are also cases of outright black-ops in terms of evidence-gathering, but I suspect they’re much rarer, simply because that sort of work is usually done by a wide range of people with varied motivations, not a dedicated cabal who will work together to twist data.
I think most of the damage done to evidence-gathering is done in fairly open ways: the organisation explains what it’s doing even while it’s selecting a dodgy method of analysis.
True, but this is generally hard to notice if you’re a non-expert, and it is also hard to tell who is or isn’t an expert if you’re not one. As a result people tend to go with the “official position”.
There are also cases of outright black-ops in terms of evidence-gathering, but I suspect they’re much rarer, simply because that sort of work is usually done by a wide range of people with varied motivations,
True, but unfortunately what tends to happen in practice is that enough people in the data pipeline manipulate the data, for one reason or another, that by the time the analysis is finished its correlation with reality is rather tenuous.
These are both risks. But manipulation at various points seems unlikely to add up to systematically misleading results: the involvement of many manipulators would presumably create a lot of noise rather than a consistent bias.
Not necessarily; one of the manipulators might get lucky and do something that overrides the others.
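The statistical disagreement in these last two comments can be sketched with a toy simulation. All the numbers here (twenty manipulators, pushes of ±2, one large push of +15) are illustrative assumptions, not anything from the thread; the point is just to contrast many independent pushes with one dominant one:

```python
import random

random.seed(0)

TRUE = 100.0   # the quantity the evidence-gathering is trying to measure
TRIALS = 5000

def estimate(biases):
    # Final reported figure: truth, plus each manipulator's push,
    # plus a little ordinary measurement noise.
    return TRUE + sum(biases) + random.gauss(0, 1)

# Case 1: twenty manipulators with independent, conflicting agendas;
# each pushes the figure up or down by 2, direction chosen independently.
many = [estimate([random.choice([-2.0, 2.0]) for _ in range(20)])
        for _ in range(TRIALS)]

# Case 2: a single manipulator applies one large, consistent push of +15.
one = [estimate([15.0]) for _ in range(TRIALS)]

mean_many = sum(many) / TRIALS
mean_one = sum(one) / TRIALS
spread_many = (sum((x - mean_many) ** 2 for x in many) / TRIALS) ** 0.5

print(f"many manipulators: mean estimate {mean_many:.1f}, spread {spread_many:.1f}")
print(f"one dominant manipulator: mean estimate {mean_one:.1f}")
```

Under these assumptions the many-manipulator case stays centred near the true value but with a large spread (noisy, not systematically wrong), while the single dominant manipulator shifts every trial in the same direction (systematically misleading) — which is the scenario where one manipulator "overrides the others".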