I don’t see how the brain would perceive the kind of pixel-level adversarial perturbations most of us think of (e.g., https://openai.com/blog/adversarial-example-research/) as anything other than noise, if they even reach the threshold of perception at all.
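For concreteness, here is a minimal sketch of the fast gradient sign method (FGSM), the kind of pixel-level perturbation the OpenAI post talks about. The stand-in classifier, random input, and epsilon value below are placeholder assumptions chosen for illustration, not taken from that post; the point is just how small the per-pixel change can be, which is why it often sits at or below the threshold of perception.

```python
# Minimal FGSM sketch: perturb each pixel slightly in the direction that
# increases the classifier's loss. Model, input, and epsilon are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier (untrained); any differentiable image model would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # pixel values in [0, 1]
y = torch.tensor([3])                             # arbitrary label for the sketch

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(x), y)
loss.backward()

# Move every pixel by at most epsilon in the loss-increasing direction.
epsilon = 2 / 255  # roughly one or two 8-bit intensity levels per pixel
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("max per-pixel change:", (x_adv - x).abs().max().item())
```

With an epsilon of a couple of intensity levels, the perturbation is essentially invisible, yet perturbations of this scale are what flip the classifier's prediction in the standard adversarial-example demonstrations.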
Look through https://www.gwern.net/docs/ai/adversarial/index. The theoretical work is the isoperimetry paper: https://arxiv.org/abs/2105.12806
Here is a paper showing that humans can classify pixel-level adversarial examples that look like noise at better-than-chance levels; see Experiment 4 (and also Experiments 5 and 6): https://www.nature.com/articles/s41467-019-08931-6
Thanks for the links!