I intended to mean something similar to what Ajeya meant in her report:
I’ll define the “effective horizon length” of an ML problem as the amount of data it takes (on average) to tell whether a perturbation to the model improves performance or worsens performance. If we believe that the number of “samples” required to train a model of size P is given by KP, then the number of subjective seconds that would be required should be given by HKP, where H is the effective horizon length expressed in units of “subjective seconds per sample.”
To be clear, I’m still a bit confused about the concept of horizon length. I’m not sure it’s a good idea to think about things this way. But it seems reasonable enough for now.
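The quoted relationship can be sketched numerically. All constants below are hypothetical placeholders for illustration, not estimates from Ajeya's report:

```python
# Sketch of the quoted relationship: subjective seconds = H * K * P.
# Every number here is a hypothetical placeholder, not a value from the report.

P = 1e12  # model size in parameters (hypothetical)
K = 20    # samples required per parameter (hypothetical scaling constant)
H = 10    # effective horizon length, in subjective seconds per sample (hypothetical)

samples_needed = K * P                    # total training samples: K * P
subjective_seconds = H * samples_needed   # total subjective seconds: H * K * P

print(f"samples needed:     {samples_needed:.2e}")
print(f"subjective seconds: {subjective_seconds:.2e}")
```

The point of the decomposition is that H isolates the "seconds per sample" factor, so longer-horizon tasks multiply the whole training cost linearly.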
I’ve been working on a draft blog post kinda related to that. If you’re interested, I can DM you a link; it could use a second pair of eyes.
Sure!