The fully general efficiency counterargument does not exactly apply here. It says:
If efficiency falls short, then we must estimate the amount by which it falls short, analyse the implementation, improve incentives, etc… Do you see what’s going on there? The solution to a badly implemented system of measurement is to add extra complications to the system, to measure even more things, to add more constraints, more boxes to tick.
While obtaining more accurate metrics can sometimes be good, the quest for ever more accuracy can lead to the same kind of overfitting I described previously. The solution, once again, is not to keep striving for more accurate metrics, but to start taking uncertainty into account.
To use the educational example again: if you launched into educational reforms knowing absolutely nothing about your students’ performance, you would most likely waste a bunch of resources, because you would be operating in the dark and your efficiency would be very low. You could build up some metrics, such as standardized tests, that would allow you to estimate each student’s performance; but because these metrics are merely estimates, focusing solely on optimizing them would also most likely waste a bunch of resources (as per my previous comment).
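To make that concrete, here is a minimal sketch of why optimizing a noisy estimate backfires (purely illustrative; the fifty hypothetical interventions and all the numbers are made up, not taken from any real study): if you rank candidate reforms by their measured test-score gains and pick the winner, you are partly selecting on measurement noise, so the winner’s true effect tends to be smaller than its measured effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fifty hypothetical interventions; both the effects and the noise are invented.
true_effect = rng.normal(0.0, 1.0, 50)     # real gain in student learning
noise = rng.normal(0.0, 1.0, 50)           # error in the standardized-test estimate
measured_effect = true_effect + noise      # what the metric actually shows

pick = np.argmax(measured_effect)          # pure metric optimization picks this one

print("measured effect of the pick:", round(float(measured_effect[pick]), 2))
print("true effect of the pick:    ", round(float(true_effect[pick]), 2))
print("best true effect available: ", round(float(true_effect.max()), 2))
# Selecting on a noisy metric inflates the winner's apparent effect and often
# misses the genuinely best option: overfitting to the estimate, not the goal.
```

Taking uncertainty into account here would mean something like shrinking the measured effects toward the mean, or modelling the noise explicitly, before committing resources, rather than simply demanding more measurement.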
Yes, you could collect ever more detailed metrics. You could begin collecting information about each student’s genetics, diet, real-time GPS coordinates, and so on. However, these metrics are not free. The time, money, and CPU cycles you spend on them could be spent on something else, and that something else would most likely get you a better payoff. In fact, at the extreme end of the spectrum, you could end up in a situation where all of your resources go toward collecting and analyzing terabytes of data, and none of them go toward actually teaching the students. This is not efficiency; this is waste.
Remember, efficiency is not defined as “measuring everything about your problem domain as accurately as possible”, but rather as something like “achieving as many of your true goals as possible using as few resources as possible, while operating under uncertainty regarding your true goals, your resources, and your performance”.
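A toy way to see both points at once (the functional forms below are my own assumptions, not anything implied by the example above): split a fixed budget between measurement and teaching, score efficiency as true learning achieved per unit of budget spent, and note that the all-measurement extreme scores exactly zero.

```python
import numpy as np

def true_learning(measure_frac, budget=1.0):
    """Toy model: better measurement helps target the teaching, with sharply
    diminishing returns; only the teaching share produces learning directly."""
    measurement = measure_frac * budget
    teaching = (1.0 - measure_frac) * budget
    targeting = 1.0 - np.exp(-10.0 * measurement)   # saturates quickly
    return teaching * (0.5 + 0.5 * targeting)

budget = 1.0
for frac in (0.0, 0.05, 0.2, 0.5, 1.0):
    eff = true_learning(frac, budget) / budget      # true goals per resource spent
    print(f"measurement share {frac:.2f} -> efficiency {eff:.3f}")
# A modest measurement share beats none at all, but the returns fall off fast,
# and putting the whole budget into measurement achieves nothing: pure waste.
```

The exact numbers don’t matter; the point is that measurement only pays for itself while it improves what you actually do with the remaining resources.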
Good ^_^
Did you notice that the section criticising fully general efficiency counterarguments… was itself a fully general counterargument?
Hmm, no, I don’t see that. If anything, that section is more of a straw-man. It cautions against excessive obsession with collecting data (which can be a real problem), but it assumes that collecting data is all that efficiency is about (as opposed to actually achieving your true goals).