The variables that had high information values were routinely those that the client had never measured… * The variables that clients [spent] the most time measuring were usually those with a very low (even zero) information value…
This seems very unlikely to be a coincidence. Any theories about what’s going on?
My usual interpretation is that actual measurements with high information value can destabilize the existing system (e.g., by demonstrating that people aren’t doing their jobs, or that existing strategies aren’t working or are counterproductive), and are therefore dangerous. Low-information measurements are safer.
It’s not that they’re measuring the wrong variables, it’s most likely that those organizations have already made the decisions based on variables they already measure. In the “Function Points” example, I would bet there were a few obvious learnings early on that spread throughout the organizations, and once the culture had changed any further effort didn’t help at all.
Another example: I took statistics on how my friends played games that involved bidding, such as Liar’s Poker. I found that they typically bid too much. As a result, a measurement of how many times someone had the winning bid was a strong predictor of how they would perform in the game: people who bid high would typically lose.
Once I shared this information, behavior changed and people started using a much more rational bidding scheme. And the old measurement of “how often someone bid high” was no longer very predictive. It simply meant that they’d had more opportunities where bidding high made sense. Other variables such as “the player to your left” started becoming much more predictive.
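The decay of that predictor can be sketched with a toy simulation (all probabilities here are invented for illustration, not taken from the actual games): when players vary widely in how often they overbid, "bids high often" correlates strongly and negatively with winning; once everyone bids nearly rationally, the correlation collapses toward zero.

```python
import random

random.seed(0)

def simulate(max_overbid):
    """Toy model: each player's tendency to overbid is drawn from
    Uniform(0, max_overbid). An overbid loses most rounds (10% win
    rate); a sensible bid wins 50% of the time. Returns a list of
    (times_bid_high, rounds_won) pairs, one per player."""
    results = []
    for _ in range(200):                # players
        p_high = random.uniform(0, max_overbid)
        high_bids = wins = 0
        for _ in range(50):             # rounds
            if random.random() < p_high:
                high_bids += 1
                wins += random.random() < 0.1   # overbid: usually lose
            else:
                wins += random.random() < 0.5   # rational bid
        results.append((high_bids, wins))
    return results

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

before = corr(simulate(0.8))    # players differ a lot in overbidding
after = corr(simulate(0.05))    # everyone bids nearly rationally
# `before` comes out strongly negative; `after` sits near zero.
```

The variable itself didn’t change; the behavior it proxied for did, which is exactly why its information value evaporated once the finding was shared.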
One possibility is that there are a very large number of things they could measure, most of which have low information value. If they chose randomly we might expect to see an effect like this, and never notice all the low information possibilities they chose not to measure.
I’m not suggesting that they actually choose randomly, but it might be that they chose, say, the variables that were easiest to measure, and that these are neither systematically good nor bad, so the result looks similar to random in terms of useful information.
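This "ease looks like random" point can be sketched numerically, under the assumptions that information value is heavy-tailed (a few variables carry most of it) and independent of how easy a variable is to collect; every number and distribution below is invented for illustration.

```python
import random

random.seed(1)

# 1,000 candidate variables: information value is heavy-tailed
# (Pareto-distributed) and independent of collection ease.
candidates = [{"value": random.paretovariate(1.5) - 1.0,
               "ease": random.random()}
              for _ in range(1000)]

def captured(chosen):
    """Total information value of a chosen set of variables."""
    return sum(c["value"] for c in chosen)

k = 20
by_ease = sorted(candidates, key=lambda c: -c["ease"])[:k]    # what orgs measure
by_value = sorted(candidates, key=lambda c: -c["value"])[:k]  # what they could
at_random = random.sample(candidates, k)

# Measuring the 20 easiest variables captures roughly what measuring 20
# at random does, and far less than the best 20 by information value.
```

Because ease is independent of value, selecting by ease is statistically indistinguishable from selecting at random with respect to information, which matches the observed pattern without requiring anyone to choose badly on purpose.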
In the many cases where I’ve seen this, it’s because the things being collected are generally the things that are easiest to collect. Often little thought was put into it, and sometimes these things were collected by accident. Generally, the things that are easiest to collect offer the least insight (if it’s easy to collect, it’s already part of your existing business process).
If there are generally decreasing returns to measurement of a single variable, I think this is more what we would expect to see. If you’ve already put effort into measuring a given variable, it will have lower information value on the margin. If you add in enough cost for switching measurements, then even the optimal strategy might spend a serious amount of time and effort pursuing lower-value measurements.
Further, if they hadn’t even thought of some measurements they couldn’t have pursued them, so they wouldn’t have suffered any declining returns.
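One way to make the switching-cost point concrete is a toy Bayesian model (my own construction, not anything from the original study): treat each reading of a variable as an observation of a unit-variance Gaussian mean with a standard-normal prior, so the posterior standard deviation after n readings is 1/sqrt(n+1), and value the next reading by how much uncertainty it removes. The horizon, cost, and weight below are illustrative assumptions.

```python
# Toy model of diminishing returns: posterior sd after n readings of a
# variable is 1/sqrt(n+1); the "value" of the next reading is the sd
# it removes, which shrinks as n grows.

def marginal_value(n):
    """Uncertainty removed by reading n+1, given n readings so far."""
    return 1 / (n + 1) ** 0.5 - 1 / (n + 2) ** 0.5

def plan_value(n_old, horizon, switch_cost, new_weight):
    """Total value over `horizon` further readings of (a) staying with
    a variable already measured n_old times, vs. (b) paying switch_cost
    to start on a fresh variable that is new_weight times as
    informative per unit of uncertainty removed."""
    stay = sum(marginal_value(n_old + k) for k in range(horizon))
    switch = (new_weight * sum(marginal_value(k) for k in range(horizon))
              - switch_cost)
    return stay, switch

# Short horizon: the one-off switching cost dominates, so it is
# rational to keep taking low-value readings of the old variable.
stay_short, switch_short = plan_value(n_old=30, horizon=3,
                                      switch_cost=0.8, new_weight=1.5)

# Long horizon: the more informative fresh variable eventually pays
# for the switch.
stay_long, switch_long = plan_value(n_old=30, horizon=30,
                                    switch_cost=0.8, new_weight=1.5)
```

In this sketch the same organization is behaving optimally in both cases; it just looks like it is wasting effort on low-value measurements whenever the planning horizon is short relative to the switching cost.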
I don’t think this is the primary reason, but it may contribute, especially in conjunction with the reasons in sibling comments.
We run into this all the time at my job.