I mean, no fair, right? Of course, if I'm actively looking for adversarial examples and you aren't actively guarding against them, you're going to be in a lot of trouble.
This suggests an angle of attack for people who don't want their art used by image AIs to make AI art: the hosting service could offer an auto-perturbation feature, or someone could distribute a perturbation program that artists run on their own files before uploading them to an online host.
This would work sort of like encryption does, in that you'd have to update your perturbation algorithm periodically as the opposition (the AI companies) figures out how to compensate for it.
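A rough sketch of what that standalone perturbation program might look like, assuming a basic FGSM step (the fast gradient sign method) against an off-the-shelf classifier as a stand-in for whatever model the scrapers actually use; the model choice, epsilon, and resizing here are all illustrative assumptions, not a description of any real tool:

```python
# Sketch only: nudge an image so a model "sees" it differently while the
# change stays nearly invisible to humans. Real protection tools use more
# sophisticated, targeted perturbations than a single FGSM step.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def perturb_image(path_in: str, path_out: str, epsilon: float = 4 / 255) -> None:
    # Stand-in model; the artist can't know what the scraper actually uses,
    # so this relies on adversarial examples transferring between models.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    # Resizing to 224x224 keeps the sketch short; a real tool would
    # perturb the image at its native resolution.
    x = T.Compose([T.Resize((224, 224)), T.ToTensor()])(
        Image.open(path_in).convert("RGB")
    ).unsqueeze(0)
    x.requires_grad_(True)

    # FGSM: one gradient step that pushes the image away from whatever
    # the model currently thinks it is, clipped to stay a valid image.
    logits = model(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    T.ToPILImage()(x_adv.squeeze(0)).save(path_out)

perturb_image("original.png", "perturbed.png")
```

The epsilon knob is where the arms race lives: as the companies harden their models against a given perturbation, the program has to change its attack or push epsilon toward visibility.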