ChatGPT’s developer, OpenAI, has provided some clarity on the situation by stating that the Mayer issue was due to a system glitch. “One of our tools mistakenly flagged this name and prevented it from appearing in responses, which it shouldn’t have. We’re working on a fix,” said an OpenAI spokesperson.
...OpenAI’s Europe privacy policy makes clear that users can delete their personal data from its products, in a process also known as the “right to be forgotten”, where someone removes personal information from the internet.
OpenAI declined to comment on whether the “Mayer” glitch was related to a right to be forgotten procedure.
Good example of the redactor’s dilemma and the need for Glomarizing: by confirming that they have a tool to flag names and hide them, and then by neither confirming nor denying that this was related to a right-to-be-forgotten order (a meta-gag), they confirm that it’s a right-to-be-forgotten bug.
Similar to when OA people were refusing to confirm or deny signing OA NDAs which forbade them from discussing whether they had signed an OA NDA… That was all the evidence you needed to know that there was a meta-gag order (as was eventually confirmed more directly).
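To make that inference concrete, here is a toy Bayesian version of it. The numbers are entirely made up for illustration; the only structural assumption is that a company under a meta-gagged right-to-be-forgotten (RTBF) order has little choice but to say “no comment”, whereas a company with nothing to hide can cheaply issue a flat denial:

```python
# Toy Bayesian reading of the "declined to comment" signal above.
# All probabilities are invented for illustration, not sourced.
p_rtbf = 0.5            # prior: the glitch stems from an RTBF order
p_nc_given_rtbf = 0.95  # a meta-gag all but forces "no comment"
p_nc_given_other = 0.20 # otherwise a flat denial is cheap and likely

# Bayes' rule: P(RTBF | no comment)
posterior = (p_nc_given_rtbf * p_rtbf) / (
    p_nc_given_rtbf * p_rtbf + p_nc_given_other * (1 - p_rtbf)
)
print(f"P(RTBF | declined to comment) = {posterior:.2f}")  # -> 0.83
```

Under these assumptions, a simple “declined to comment” moves the hypothesis from a coin flip to better than 4-to-1 odds, which is the sense in which the non-denial is itself confirmation.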
I don’t think it’s necessarily GDPR-related, but the names Brian Hood and Jonathan Turley make sense from a legal liability perspective. According to Ars Technica:
Why these names?
We first discovered that ChatGPT choked on the name “Brian Hood” in mid-2023 while writing about his defamation lawsuit. In that lawsuit, the Australian mayor threatened to sue OpenAI after discovering ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.
The case was ultimately resolved in April 2023 when OpenAI agreed to filter out the false statements within Hood’s 28-day ultimatum. That is possibly when the first ChatGPT hard-coded name filter appeared.
As for Jonathan Turley, a George Washington University Law School professor and Fox News contributor, 404 Media notes that he wrote about ChatGPT’s earlier mishandling of his name in April 2023. The model had fabricated false claims about him, including a non-existent sexual harassment scandal that cited a Washington Post article that never existed. Turley told 404 Media he has not filed lawsuits against OpenAI and said the company never contacted him about the issue.
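If the “hard-coded name filter” description is right, the mechanism is presumably a guardrail sitting outside the model that kills the response stream the moment a blocked string would be emitted. Here is a minimal sketch of such a filter; the blocklist, window size, and error message are all assumptions for illustration, not OpenAI’s actual code:

```python
# Hypothetical hard-coded name filter layered outside the model.
# Simplest mechanism consistent with the observed behavior:
# replies that start normally, then die mid-stream at the name.

BLOCKED_NAMES = {"brian hood", "jonathan turley", "david mayer"}

def stream_with_name_filter(token_stream):
    """Yield model tokens, aborting the moment a blocked name appears."""
    emitted = ""
    for token in token_stream:
        emitted += token
        # Only recent text can complete a name, so a small tail suffices.
        window = emitted[-64:].lower()
        if any(name in window for name in BLOCKED_NAMES):
            raise RuntimeError("I'm unable to produce a response.")
        yield token

# Demo: the filter trips exactly at the name, cutting off the reply.
tokens = ["The ", "Australian ", "mayor ", "Brian", " Hood", " sued", "..."]
try:
    for t in stream_with_name_filter(iter(tokens)):
        print(t, end="")
except RuntimeError as e:
    print(f"\n[stream aborted: {e}]")
```

A post-hoc output filter like this would also explain why the failure looks like an error rather than a refusal: the model itself handles the name fine, and the response is severed only when the string reaches the output layer.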
There’s a theory (twitter citing reddit) that at least one of these people filed GDPR right-to-be-forgotten requests. So one hypothesis would be: all of those people filed such GDPR requests.
But the reddit post (as of right now) guesses that it might not be specifically about GDPR requests per se, but rather more generally “It’s a last resort fallback for preventing misinformation in situations where a significant threat of legal action is present”.
OA has indirectly confirmed it is a right-to-be-forgotten issue: https://www.theguardian.com/technology/2024/dec/03/chatgpts-refusal-to-acknowledge-david-mayer-down-to-glitch-says-openai
Interestingly, Jonathan Zittrain is on record saying the Right to be Forgotten is a “bad solution to a real problem” because “the incentives are clearly lopsided [towards removal]”.
User throwayian on Hacker News ponders an interesting abuse of this sort of censorship: