About “target”/“hitting the target”, maybe we should stick with Dario’s labels “training target” and “robustness”? He recently complained about people inventing new names for existing concepts, thereby causing confusion, and this seems to be a case where that’s pretty easy to avoid?
Except that “hitting the target” is much more evocative than “robustness” :/
Maybe “robustness (aka hitting the target)” would be okay to write.
I do agree. I think the main reason to stick with “robustness” or “reliability” is that that’s how the problems of “my model doesn’t generalize well / is subject to adversarial examples / didn’t really hit the training target outside the training data” are referred to in ML, and it gives a bad impression when people rename problems. I’m definitely most in favor of giving a new name like “hitting the target” if we think the problem we care about is different in a substantial way (which could definitely happen going forward!)
“Target” and “hitting the target” seem to align very well with the terms “accuracy” and “precision”, so much so that almost all images of accuracy and precision feature a target. Maybe we could use terms like “Value Accuracy” and “Value Precision”, to be a bit more specific?
I think of “robustness” and “reliability” as aspects of “precision”, but they’re not the only ones. To me, those two terms imply improving on an already high chance of success, rather than the challenge of getting anywhere close to begin with, without specific external constraints. The reason they apply to rockets and other things with failure probabilities is that those discussions typically focus on the individual parts of such systems. Another case of non-robust precision might be an AI trying to understand some of the much finer details of what’s considered “the human value function.”
[Edit] I don’t mean to worsen the “new terms” problem; I don’t intend for these specific terms to be used going forward, and offer them only for demonstration.