If X are making claims that everyone knows are false, then there’s no element of deception
“Everyone knows” is an interesting phrase. If literally everyone knew, what would be the function of making the claim? How do you end up with a system that wouldn’t work without false assertions, and yet allegedly “everyone” knows that the assertions are false? It seems more likely that the reason the system wouldn’t work without false assertions is that someone is actually fooled. If the people who do know are motivated to prevent it from becoming common knowledge, “It’s not deceptive because everyone knows” would be a tempting rationalization for maintaining the status quo.
If literally everyone knew, what would be the function of making the claim? How do you end up with a system that wouldn’t work without false assertions, and yet allegedly “everyone” knows that the assertions are false?
This is answered in Benquo’s last post; take a look at stages 3 and 4 to see how this situation can arise.
https://www.lesswrong.com/posts/fEX7G2N7CtmZQ3eB5/simulacra-and-subjectivity
I think this conversation might be suffering from ambiguity in the term “knows”; it doesn’t mean the same thing across simulacrum levels. In fact, it’s not clear how someone operating above SL2 can “know” anything in the standard philosophical sense. There’s know-how, and there’s the holding of opinions that lower-SL people would agree with, but those opinions are held as a function of social reality, not with real “aboutness” pointing to underlying reality.
This is key. There’s a very weird kind of knowing—somewhere between amnesia and willfully ignoring the problem—when bad data is aggregated into statistics, and those who know that the data is bad decide to rely on the statistics anyway, because it’s the best they have.
This can even be entirely honest. Even if everybody really does have common knowledge that X is a lie, they probably don’t agree on what the actual truth is, and acting as if the known lie X is true can be a compromise position to get stuff done, as long as X isn’t too far away from the truth.
How do you end up with a system that wouldn’t work without false assertions, and yet allegedly “everyone” knows that the assertions are false?
One way this might happen:
1. Someone designs a process that requires X to happen. (This process might be entirely sensible, at the time.)
2. This rule is embodied in a necessary component of the process (e.g. it’s coded into software, or it’s one sentence in a large legal document that also serves many other necessary purposes).
3. Circumstances change so that either the original reason for X no longer applies, or some higher priority trumps the need for X.
4. People in the field who are trying to keep the process running in the face of changing circumstances decide it is necessary to ignore the rule requiring X to happen, as a triage measure.
5. But the embodied component still prevents proceeding to the next step unless someone attests that X has happened, and the embodiment is harder to change than participants’ behavior, so people feed false inputs into the component.
6. Knowledge of this work-around spreads, possibly until literally every person involved in the process is fully aware of it.
7. Since no clear harm is presently occurring, no one devotes resources to redesigning the component.
(You could argue that “the software is being fooled”, but that takes us back to “I don’t think most people would call that fraud”.)
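To make the work-around concrete, here is a minimal hypothetical sketch (the function name, field names, and record structure are all invented for illustration, not taken from any real system): the embodied component can only check the attestation it is fed, not whether X actually happened, so a false input is enough to keep the process moving.

```python
def advance_to_next_step(record: dict) -> str:
    """Hypothetical gate coded into the process software.

    It checks only that someone attested that X happened; it has no way to
    verify the underlying fact, just the input it is fed.
    """
    if not record.get("x_attested", False):
        raise RuntimeError("Cannot proceed: X has not been attested.")
    return "proceeding to next step"

# Steps 4-5 above: the original reason for X no longer applies, so as a
# triage measure the operator attests X anyway, feeding a false input
# into the component.
record = {"x_actually_happened": False, "x_attested": True}
print(advance_to_next_step(record))  # proceeding to next step
```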
I’m sure there are also many situations where someone is being fooled and “everyone knows” is just a comforting lie.