That seems intuitively right for unexamined or superficially examined lies. My point was mostly that if the liar is pressed hard enough, he's going to get outcomputed: he has a much harder problem to solve, constructing a self-consistent counterfactual world rather than merely verifying self-consistency.
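To make that asymmetry concrete, here's a toy sketch in Python (my own illustrative model, with made-up claims and constraints, not anything from the original point): the liar has to search for some assignment of truth values that denies the inconvenient claim while staying consistent with everything already established, whereas the examiner only has to check one proposed story against the constraints.

```python
from itertools import product

# Toy model: a "story" assigns True/False to N_CLAIMS interrelated claims.
# CONSTRAINTS are implications "if claim i is true, claim j must be true",
# standing in for established facts the story must not contradict.
# All names and numbers are illustrative.
CONSTRAINTS = [(0, 1), (1, 2), (2, 3), (3, 4)]
N_CLAIMS = 5

def consistent(story):
    """Examiner's job: check one proposed story against all constraints.
    Cost is linear in the number of constraints."""
    return all(not story[i] or story[j] for i, j in CONSTRAINTS)

def construct_lie(must_deny):
    """Liar's job: find *some* consistent story in which claim `must_deny`
    is false. Brute force here is exponential in the number of claims."""
    for story in product([True, False], repeat=N_CLAIMS):
        if not story[must_deny] and consistent(story):
            return story
    return None  # no self-consistent cover story exists

print(construct_lie(must_deny=4))       # liar searches up to 2^N candidate stories
print(consistent((True,) * N_CLAIMS))   # honest check of a single story is cheap
```

The gap is the usual search-versus-verification one: under interrogation the liar is solving a constraint-satisfaction problem on the fly, while the questioner only ever verifies.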
Interestingly, a large quantity of unexamined lies changes the balance. It's cheap for liars to add a new lie to the existing ones, but hard for an honest person to determine what is true, so the computational burden shifts away from the liars. (We need to assume that getting caught in a lie is a low-consequence event, and probably a bunch of other things I'm forgetting, to make this work, but I hope the point makes sense.)
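A crude way to see the flipped asymmetry (numbers purely made up by me for illustration): the liar's marginal cost per extra unexamined claim is small and roughly constant, while the honest side pays a much larger per-claim cost to check it, so the aggregate burden lands on the honest side as the claim count grows.

```python
# Toy cost model for the flipped asymmetry (all numbers are assumptions).
COST_TO_EMIT_CLAIM = 1     # liar: invent and post one more unexamined claim
COST_TO_CHECK_CLAIM = 50   # honest reader: trace sources, cross-check, rebut

def total_costs(n_claims):
    """Aggregate effort on each side when n_claims go unexamined by default."""
    return n_claims * COST_TO_EMIT_CLAIM, n_claims * COST_TO_CHECK_CLAIM

for n in (1, 10, 100):
    liar, honest = total_costs(n)
    print(f"{n:>3} claims: liar effort {liar:>4}, honest effort {honest:>5}")
```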
I've heard someone refer to this as the Bullshit Asymmetry problem, where refuting low-effort lies (a.k.a. bullshit) is harder than generating the bullshit in the first place.