Watermarks: Signing, Branding, and Boobytrapping
This post is motivated by a popular conflation in the context of AI generation (such as in the recent WSJ article about detecting “cheating”), but it applies more generally.
There are many ways of marking ownership/provenance of a particular work, text, images, or anything else, and they serve different purposes.
If you have no adversaries working against you — say, you’re an artist whose primary goal is for people who see your paintings to be able to find your website and more of your work — simply adding your name might be enough. An AI-related example of this kind of thing might be adding prompt information to the metadata of your image file.
But if you do have adversaries, there are two dimensions along which one might wish to strengthen a mark: making it harder to remove (indelibility) and making it harder to detect (invisibility). These are independently useful goals, and many techniques for achieving one of them are subjected to misplaced criticism for missing targets they aren’t even aiming at.
(These are both also distinct from anti-counterfeiting measures, where the goal is to make marks that are hard to copy.)
I propose the narrower term “boobytrapping” for when the goal is to make invisible[1] marks so that, after the fact, you can punish people who use or distribute the works they’re applied to (for stealing, cheating, leaking, etc.).
Boobytrapping can be thought of as a kind of steganography, where the content of the message is just that it exists. A famous example of such a boobytrap would be paper towns, and these serve their purpose despite being trivial to remove upon detection.
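To make the “the message is just that it exists” idea concrete, here is a minimal sketch of a text boobytrap using zero-width Unicode characters. This is not the method of any particular paper or product, and the function names are my own; it only illustrates the asymmetry above: detection is trivial for the mark’s owner, and removal is trivial for anyone who learns the mark exists.

```python
# Illustrative only: an invisible mark whose entire payload is its own
# presence. Zero-width characters render as nothing in most contexts.

ZERO_WIDTH_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner pattern

def apply_boobytrap(text: str) -> str:
    """Attach the invisible mark after the first word."""
    words = text.split(" ")
    words[0] += ZERO_WIDTH_MARK
    return " ".join(words)

def is_boobytrapped(text: str) -> bool:
    """Detection: the mark's mere presence is the signal."""
    return ZERO_WIDTH_MARK in text

marked = apply_boobytrap("The quick brown fox")
assert marked != "The quick brown fox"  # changed, though it renders identically
assert is_boobytrapped(marked)

# Trivial to strip once you know to look: invisibility, not indelibility.
stripped = marked.replace("\u200b", "").replace("\u200c", "")
assert not is_boobytrapped(stripped)
```

Note how the scheme’s value evaporates the moment the technique is public — exactly the secrecy trade-off discussed below.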
For many such uses, much of the strength of a boobytrap comes from secrecy about the very fact that it exists, let alone the details of the technique used; so much so that one can often get much of the benefit from a widespread belief alone that such marks exist. Once the marks are detected, their indelibility may not have been worth trading invisibility for.
Similarly, if you want indelible[1] marks that prove authorship robustly against edits (call this “branding”), you might not care whether the brands are detectable as long as they don’t degrade the quality of the work.
I think this paper, Embarrassingly Simple Text Watermarks [“boobytraps” in my proposed terminology], makes the same point I do here, cleverly veiled; but it could also be the result of a lack of clarity between the two goals of watermarking I distinguish.
[1] Not to be taken to mean literally impossible to detect or remove.