Bounding worst mistake: preventing adversarial examples and generalization failures. Plenty of work on this in general, but in particular I’m interested in certified bounds. (Though those usually turn out to have some sort of unhelpfully tight premise.)
There are tons of papers I could link here that I haven't evaluated deeply, but you can find a lot of them by following citations from https://www.katz-lab.com/research. In particular:
Verifying Generalization in Deep Learning
gRoMA: a Tool for Measuring Deep Neural Networks Global Robustness
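To make "certified bounds" concrete: one of the simplest certification techniques is interval bound propagation (IBP), which pushes an L-infinity ball around the input through the network and checks whether the true class provably wins under the worst case. This is a minimal sketch of the generic idea, not the method of any paper above; the toy network and all names here are made up for illustration.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # For y = W x + b with x in [lo, hi] elementwise, split W into its
    # positive and negative parts to get the tightest interval bounds.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def certify(x, eps, layers, true_class):
    # Propagate the box [x - eps, x + eps] through affine + ReLU layers.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # Certified iff the true logit's lower bound beats every other
    # logit's upper bound: no point in the ball can flip the prediction.
    others = np.delete(hi, true_class)
    return bool(lo[true_class] > others.max())

# Toy randomly weighted 2-layer network, purely for demonstration.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
x = rng.standard_normal(4)
h = np.maximum(layers[0][0] @ x + layers[0][1], 0)
pred = int((layers[1][0] @ h + layers[1][1]).argmax())
```

At eps = 0 the intervals collapse to a point, so the clean prediction is always certified; as eps grows the bounds loosen fast, which is exactly the "unhelpfully tight premise" problem: the certificates that exist tend to hold only for very small perturbation radii.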
here’s what’s on my to-evaluate list under my "ai formal verification and hard robustness" tag in Semantic Scholar:
https://arxiv.org/pdf/2302.04025.pdf
https://arxiv.org/pdf/2304.03671.pdf
https://arxiv.org/pdf/2303.10513.pdf
https://arxiv.org/pdf/2303.03339.pdf
https://arxiv.org/pdf/2303.01076.pdf
https://arxiv.org/pdf/2303.14564.pdf
https://arxiv.org/pdf/2303.07917.pdf
https://arxiv.org/pdf/2304.01218.pdf
https://arxiv.org/pdf/2304.01826.pdf
https://arxiv.org/pdf/2304.00813.pdf
https://arxiv.org/pdf/2304.01874.pdf
https://arxiv.org/pdf/2304.03496.pdf
https://arxiv.org/pdf/2303.02251.pdf
https://arxiv.org/pdf/2303.14961.pdf
https://arxiv.org/pdf/2301.11374.pdf
https://arxiv.org/pdf/2303.10024.pdf
Most of these are probably not that amazing, but some of them seem quite interesting. Would love to hear which stand out to anyone passing by!