>I imagine if our goal was “never misclassify an MNIST digit” we could get to 6-7 nines of “worst-case accuracy” even out of existing neural nets, at the cost of saying “I don’t know” for the confusing 0.2% of digits.
Er, how? I haven’t seen anyone describe a way to do this. Getting a neural network to meaningfully say “I don’t know” is very much cutting-edge research as far as I’m aware.
You’re right that it’s an ongoing research area, but there are a number of approaches that work relatively well. This NeurIPS tutorial describes a few. Probably the easiest thing is to use one of the calibration methods mentioned there to get your classifier to output calibrated uncertainties for each class, then say “I don’t know” if the network isn’t at least 90% confident in one of the 10 classes.
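To make that concrete, here’s a minimal sketch of the threshold-and-abstain idea using temperature scaling, which is one standard calibration method (the tutorial covers others). The helper names are mine, and it assumes you already have a trained classifier plus held-out validation logits and labels:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=100):
    """Fit a single temperature T on held-out validation logits
    (standard temperature scaling)."""
    T = nn.Parameter(torch.ones(1))
    optimizer = torch.optim.LBFGS([T], lr=0.01, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / T, val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return T.detach()

def predict_or_abstain(logits, T, threshold=0.9):
    """Predict a class, or return -1 ("I don't know") when the
    calibrated confidence is below the threshold."""
    probs = F.softmax(logits / T, dim=-1)
    conf, pred = probs.max(dim=-1)
    pred = pred.clone()
    pred[conf < threshold] = -1  # abstain on low-confidence inputs
    return pred, conf
```

The threshold is the knob: raising it reduces mistakes on the examples you do classify, at the cost of abstaining more often, which is exactly the trade-off in the quoted claim.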
OK, thanks for linking that. You’re probably right in the specific example of MNIST. I’m less convinced about more complicated tasks—it seems like each individual task would require a lot of engineering effort.
One thing I didn’t see: is there research which looks at what happens if you give neural nets more of the input space as data? Things that are explicitly out-of-distribution, random noise, abstract shapes, or maybe other modes you don’t particularly care about performance on, all labelled as “garbage” or whatever. Essentially, providing negative as well as positive examples, given that the input spaces are usually much larger than the intended distribution.
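Concretely, I’m imagining something like the toy sketch below: add one extra “garbage” label and mix negative examples into each training batch (uniform noise here, purely for illustration). The function names are hypothetical, and it assumes the model’s head outputs NUM_DIGITS + 1 logits:

```python
import torch
import torch.nn.functional as F

NUM_DIGITS = 10
GARBAGE_CLASS = NUM_DIGITS  # extra 11th label meaning "not a digit"

def make_garbage_batch(batch_size, image_shape=(1, 28, 28)):
    """Hypothetical negative examples: uniform-noise images, all given
    the garbage label. Real experiments could also use other datasets,
    abstract shapes, etc."""
    x = torch.rand(batch_size, *image_shape)
    y = torch.full((batch_size,), GARBAGE_CLASS, dtype=torch.long)
    return x, y

def training_step(model, digits, digit_labels, optimizer):
    """One optimisation step on a batch mixing real digits with
    garbage negatives; cross-entropy over NUM_DIGITS + 1 classes."""
    noise_x, noise_y = make_garbage_batch(digits.size(0))
    x = torch.cat([digits, noise_x])
    y = torch.cat([digit_labels, noise_y])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time you’d then treat a garbage-class prediction as another way of saying “I don’t know”.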