How do you end up with a system that wouldn’t work without false assertions, and yet allegedly “everyone” knows that the assertions are false?
One way this might happen:
1. Someone designs a process that requires X to happen. (This process might be entirely sensible, at the time.)
2. The rule is embodied in a necessary component of the process (e.g. it's coded into software, or it's one sentence in a large legal document that also serves many other necessary purposes).
3. Circumstances change so that either the original reason for X no longer applies, or some higher priority trumps the need for X.
4. People in the field who are trying to keep the process running in the face of changing circumstances decide, as a triage measure, that it is necessary to ignore the rule requiring X to happen.
5. But the embodied component still blocks the next step unless someone attests that X has happened, and the embodiment is harder to change than the participants' behavior, so people feed false inputs into the component (see the sketch after this list).
6. Knowledge of this workaround spreads, possibly until literally every person involved in the process is fully aware of it.
7. Since no clear harm is presently occurring, no one devotes resources to redesigning the component.
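To make step 5 concrete, here is a minimal sketch of that dynamic in Python. All names (`release_gate`, `check_x`, `x_happened`) are hypothetical illustrations, not anything from a real system: the "embodied component" hard-fails unless X is attested, so participants attest it unconditionally rather than modify the gate.

```python
def release_gate(x_happened: bool) -> None:
    """The embodied component: refuses to proceed unless X is attested."""
    if not x_happened:
        raise RuntimeError("cannot proceed: X has not happened")
    print("proceeding to next step")

# Originally, callers reported the truth:
#     release_gate(x_happened=check_x())
# After circumstances change, the widely known workaround is to attest
# unconditionally, because changing the gate is harder than lying to it:
release_gate(x_happened=True)  # false input: X may or may not have happened
```

The point of the sketch is that the false assertion lives at the call site, not in the gate: every operator can know the input is false while the component itself remains untouched.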
(You could argue that “the software is being fooled”, but that takes us back to “I don’t think most people would call that fraud”.)
I’m sure there are also many situations where someone is being fooled and “everyone knows” is just a comforting lie.