Two methods I have personally used:
- quantization to int8 (sketched below)
- model compression
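For the first of those, here is a minimal sketch of post-training dynamic int8 quantization in PyTorch (the toy model and layer sizes are made up for illustration, not from any particular system):

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained classifier head; sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights are stored as
# int8, and activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same output shape, smaller/faster model
```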
A third way is "sparse" networks: many of the weights end up near zero, and you can simply drop those, but your hardware has to support sparse matrix convolution.
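A rough sketch of that idea, using a simple magnitude threshold (the cutoff value is an arbitrary choice for illustration):

```python
import torch

# Pretend this is a trained dense weight matrix.
w = torch.randn(256, 256) * 0.01

# Zero out the near-zero weights, then store the result sparsely.
# Whether this is actually faster depends on hardware/kernel
# support for sparse ops, as noted above.
threshold = 0.01
w_pruned = torch.where(w.abs() > threshold, w, torch.zeros_like(w))
w_sparse = w_pruned.to_sparse()

x = torch.randn(256, 1)
y = torch.sparse.mm(w_sparse, x)  # sparse-dense matmul
```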
All of these methods trade a small decrease in accuracy for a large decrease in required compute.
And my point about "solvability" is that there is a certain amount of noise (entropy) in the images, such that even a perfect classifier trained only on the image set, with infinite compute and the globally best-performing model, still cannot reach 100%, because the finite set simply doesn't contain enough information. (And no, you cannot deduce the 'seed' of our universe and play it forward to that moment; you don't have enough information to do that, even with infinite compute, at least if your only input is the image set. Too many other possible universes would match the conditions. Humans trying to solve the images by hand aren't a fair comparison, because they bring in outside information that wasn't in the set.)
So there is some true ceiling for any regression problem, and you would actually expect a 'good' modern method to be acceptably close to that ceiling, or to get there soon. (If the 'true ceiling' is 97% accuracy, a model at 95% is good enough for engineering purposes.)
Or a simple example: for a mostly fair coin, you cannot predict the outcome of the next flip any better than the coin's own bias allows.
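A quick simulation of that ceiling (the 0.6 bias is just an assumed number for illustration):

```python
import random

random.seed(0)
p_heads = 0.6  # assumed bias of the "mostly fair" coin
flips = [random.random() < p_heads for _ in range(100_000)]

# The best possible predictor for i.i.d. flips guesses the majority
# side every time; its expected accuracy is max(p, 1 - p). No model,
# however large, can beat that ceiling on this data.
accuracy = sum(flips) / len(flips)
print(f"ceiling accuracy ~ {accuracy:.3f}")  # ~0.600
```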