That’s fair. For a more concrete example, see the immortal Scott Alexander’s recent post “Against Lie Inflation” (itself a reply to discussion with Jessica Taylor on her Less Wrong post “The AI Timelines Scam”). Alexander argues:
The word “lie” is useful because some statements are lies and others aren’t. [...] The rebranding of lying is basically a parasitic process, exploiting the trust we have in a functioning piece of language until it’s lost all meaning[.]
I read Alexander as making essentially the same point as “10.” in the grandparent, with G = “honest reports of unconsciously biased beliefs (about AI timelines)” and H = “lying”.
Note that it’s a central example if you’re doing agent-based modeling, as Michael points out.