“Adversarial Spheres” (Gilmer et al. 2017) - “For this dataset we show a fundamental tradeoff between the amount of test error and the average distance to nearest error. In particular, we prove that any model which misclassifies a small constant fraction of a sphere will be vulnerable to adversarial perturbations of size O(1/√d).” (emphasis mine)
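(Not the paper's proof, but a quick numeric sketch of the measure-concentration fact driving the result: on a high-dimensional sphere, any fixed "error set" covering a constant fraction ε of the surface sits within geodesic distance O(1/√d) of a typical point. The half-space cap standing in for the error set, the sample counts, and ε = 0.01 below are my own illustrative choices, not anything taken from the paper.)

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01   # fraction of the sphere covered by the toy "error set" (a spherical cap)
n = 500_000  # Monte Carlo samples per dimension

for d in [10, 100, 1_000, 10_000, 100_000]:
    # First coordinate of a uniform point on S^{d-1}: for Gaussian g,
    # x = g/||g|| is uniform on the sphere, and x_1 = g_1/||g|| needs
    # only g_1 and the squared norm of the other d-1 coordinates.
    g1 = rng.standard_normal(n)
    rest = rng.chisquare(d - 1, n)
    x1 = g1 / np.sqrt(g1**2 + rest)

    # Cap height t chosen so the cap {x : x_1 >= t} has measure eps.
    t = np.quantile(x1, 1 - eps)

    # Geodesic distance from each point to the cap: difference of polar
    # angles arccos(x_1) - arccos(t), clipped to 0 for points inside it.
    dist = np.clip(np.arccos(x1) - np.arccos(t), 0.0, None)

    print(f"d={d:7d}  median dist to cap = {np.median(dist):.5f}  "
          f"ratio to 1/sqrt(d) = {np.median(dist) * np.sqrt(d):.2f}")
```

(For large d the printed ratio should settle near the Gaussian quantile z₀.₉₉ ≈ 2.33: the typical distance to the error set shrinks like 1/√d even though the set's measure stays fixed, which is the "vulnerable to perturbations of size O(1/√d)" phenomenon in the abstract.)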
Slightly off-topic, but quick terminology question. When I first read the abstract of this paper, I was very confused about what it was saying and had to re-read it several times, because of the way the word “tradeoff” was used.
I usually think of a tradeoff as an inverse relationship between two good things, both of which you want. But in this case they use “tradeoff” to refer to the inverse relationship between “test error” and “average distance to nearest error”. Which is odd, because the first of those is bad and the second is good, no?
Is there something I’m missing that makes this sound like a natural way of describing things to others’ ears?