Well, everything has risks. But you can generally tell when people are doing that. And it’s harder if the evidence is systematic rather than post-hoc reviews of specific things.
Really, this is much harder than you seem to think.
I’m not sure exactly what you’re referring to, so it’s hard to respond. I think most of the damage done to evidence-gathering is done in fairly open ways: the organisation explains what it’s doing even while it’s selecting a dodgy method of analysis. At least that way you can debate about the quality of the evidence.
There are also cases of outright black-ops in terms of evidence-gathering, but I suspect they’re much rarer, simply because that sort of work is usually done by a wide range of people with varied motivations, not a dedicated cabal who will work together to twist data.
True, and a dodgy method of analysis is generally hard to notice if you’re a non-expert; it is also hard to tell who is or isn’t an expert if you’re not one. As a result, people tend to go with the “official position”.
True. Unfortunately, what tends to happen in practice is that enough people in the data pipeline manipulate the data, each for reasons of their own, that by the time the analysis is finished its correlation with reality is rather tenuous.
These are both risks. But manipulation at various points is presumably unlikely to add up to systematically misleading results: with many manipulators pushing in different directions, their distortions would presumably create a lot of noise rather than a consistent bias.
Not necessarily; one of the manipulators might get lucky and do something that overrides the others.
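A minimal simulation may make the statistical intuition behind these last two comments concrete. The numbers here (twenty manipulators, a ±2 nudge each, a +10 push from one dominant actor) are made up purely for illustration: many independent actors pushing in varied directions add noise around the true value without a consistent shift, while a single actor who “gets lucky” can move the mean outright.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0     # the quantity the pipeline is supposed to measure
N_RUNS = 5_000         # independent analyses run through the pipeline
N_MANIPULATORS = 20    # actors who each nudge the data in one run

def reported_value(dominant_push: float = 0.0) -> float:
    """Final figure after every manipulator has nudged the data.

    Motivations vary, so each nudge is drawn from a symmetric
    distribution: equally likely to inflate or deflate the result.
    """
    distortion = sum(random.uniform(-2.0, 2.0) for _ in range(N_MANIPULATORS))
    return TRUE_VALUE + distortion + dominant_push

for push, label in [(0.0, "varied motivations only"),
                    (10.0, "one dominant manipulator added")]:
    results = [reported_value(push) for _ in range(N_RUNS)]
    print(f"{label}:")
    print(f"  mean reported value = {statistics.fmean(results):7.2f}  (truth: {TRUE_VALUE})")
    print(f"  spread (stdev)      = {statistics.stdev(results):7.2f}")
```

Under these assumptions, the varied-motivation case stays centred on the truth but with substantial spread (noise, not systematic bias), while the dominant actor shifts the mean by his full push: the shape of both claims in the exchange above.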