This is kinda trivial but for some reason seems profound to me: the world (or reality, or whatever you want to call it) is self-consistent.
If someone is telling the truth, it’s computationally cheap for them: they’re just reporting events. If someone is lying, each probing question requires them to infer the consequences of their made-up events, and there are a lot of them. What’s worse, all it takes for the lie to fall apart is a single inconsistency!
There’s a point somewhere about memory being imperfect etc., but the liar also has to know when to say “I don’t remember” in a way that’s consistent with what they said previously, and so on. I think the main point still stands, whatever the point is.
The hardness of lying seems connected to the impossibility of counterfactual worlds: you cannot take the state of the world at one point in time, modify it arbitrarily, and press play; the state from then on will be inconsistent with the historical states.
yup, see also https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies
It turns out, though, that most human descriptions of events have a whole bucketload of possible quantum configurations that would fit, and it’s very hard to tell whether some correlated events happened. So lies are rampant and usually go uncaught, even when superficially examined.
That seems intuitively right for unexamined or superficially examined lies. My point was mostly that if the liar is pressed hard enough, they’re going to get outcomputed, because they have a much harder problem to solve: constructing a self-consistent counterfactual world versus merely verifying its self-consistency.
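Here’s a toy sketch of what I mean by “outcomputed” (everything in it is made up for illustration; real “physics” is obviously vastly bigger): model the world as an assignment of boolean facts plus a few constraints every real world has to satisfy. Checking a fully specified story is one linear pass over the constraints; fabricating a story that satisfies both the constraints and a false claim is a search over all the facts a probing question might touch.

```python
from itertools import product

# Toy model: a "world" is an assignment of boolean facts, and "physics" is a
# handful of constraints any real world must satisfy. All names are invented.
FACTS = ["rained", "streets_wet", "umbrella_sales_up", "match_cancelled"]

CONSTRAINTS = [
    lambda w: (not w["rained"]) or w["streets_wet"],           # rain => wet streets
    lambda w: (not w["rained"]) or w["umbrella_sales_up"],     # rain => umbrella sales up
    lambda w: (not w["streets_wet"]) or w["match_cancelled"],  # wet streets => match cancelled
]

def consistent(world):
    """The interrogator's job: one linear pass over the constraints."""
    return all(c(world) for c in CONSTRAINTS)

def fabricate(false_claims):
    """The liar's job: find ANY world satisfying physics *and* the false claims.
    Brute force is exponential in the number of facts left to invent."""
    free = [f for f in FACTS if f not in false_claims]
    for values in product([True, False], repeat=len(free)):
        world = dict(false_claims, **dict(zip(free, values)))
        if consistent(world):
            return world
    return None  # no consistent cover story exists at all

# Telling the truth: just read off the actual world; verification is cheap.
actual = {"rained": True, "streets_wet": True,
          "umbrella_sales_up": True, "match_cancelled": True}
print(consistent(actual))  # True

# Lying: "it rained but the streets stayed dry" has no consistent completion,
# so a single probing question exposes the contradiction.
print(fabricate({"rained": True, "streets_wet": False}))  # None
```

With four facts the search is trivial, but the verifier’s work stays linear in the number of constraints while the fabricator’s search grows exponentially with the number of facts a probing question can reach, which is the gap I’m gesturing at.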
Interestingly, a large quantity of unexamined lies changes the balance: it’s cheap for liars to add a new lie to the existing ones, but hard for an honest person to determine what is true, so the computational burden shifts away from the liars. (We need to assume that getting caught in a lie is a low-consequence event, and probably a bunch of other things I’m forgetting, to make this work, but I hope the point makes sense.)
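To put toy numbers on it (a rough cost model, nothing more): say emitting one more unexamined lie costs the liar about 1 unit of effort, while actually checking a single claim against reality costs an honest person something like k units, with k much greater than 1. After n lies the liar has spent n units and full verification costs n·k, so the gap widens linearly with the volume of lies; the deep-probing asymmetry above only kicks in once someone is willing to pay the k per claim.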
I’ve heard someone refer to this as the Bullshit Asymmetry problem, where refuting low-effort lies (aka bullshit) is harder than generating the bullshit in the first place.
This is not entirely true. Reality contradicts itself on abstract levels. That which can be destroyed by the truth might also be abstractly true. Truths which destroy other truths may turn out to be more abstract than intuitively anticipated.