In Anthropic’s support page for “I want to opt out of my prompts and results being used for training”, they say:

We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training.

Notably, this doesn’t provide an opt-out method, and the same messaging is repeated across similar articles/questions. The closest thing to an opt-out seems to be “you have the right to request a copy of your data, and object to our usage of it”.