So it’s about how adversarial inputs can produce maximally wrong answers? Wouldn’t the best policy in that case just be to ignore adversarial inputs and rely entirely on your priors?