I will actually clean this up and turn it into a post sometime soon [edit: I retract that, I am not able to make commitments like this right now]. For now, let me add another quick hypothesis on this topic whilst crashing from jet lag.
A friend of mine proposed that instead of saying ‘lies’ I could say ‘falsehoods’. Not “that claim is a lie” but “that claim is false”.
I responded that ‘falsehood’ doesn’t capture the fact that you should expect systematic deviations from the truth. I’m not saying this particular parapsychology claim is false. I’m saying it is false in a way that means you should no longer trust their other claims, and should expect those claims to have been optimised to be persuasive.
They made another proposal: instead of saying “they’re lying”, say “they’re not truth-tracking”. That is, suggest that their reasoning process (perhaps in one particular domain) does not track truth.
I responded that while this was better, it still seems to me that people won’t have an informal understanding of how to use this information. (Are you saying that the ideas aren’t especially well-evidenced? But they sound pretty plausible to me, so let’s keep discussing them and look for more evidence.) There’s a thing where if you say someone is a liar, not only do you not trust them, but you recognise that you shouldn’t even privilege the hypotheses they produce. If there’s no strong evidence either way, and it turns out the person who told you the claim is a rotten liar, then if you wouldn’t have considered the hypothesis before they raised it, don’t consider it now.
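To gesture at why (a Bayesian sketch of my own, not my friend’s framing): if a source optimises for persuasiveness rather than truth, then their asserting a hypothesis $H$ is roughly equally likely whether $H$ is true or false, so the assertion carries almost no evidence:

$$
\frac{P(H \mid \text{assert})}{P(\lnot H \mid \text{assert})}
= \frac{P(\text{assert} \mid H)}{P(\text{assert} \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
\approx 1 \cdot \frac{P(H)}{P(\lnot H)}.
$$

Your posterior odds stay at your prior odds, and if your prior alone wasn’t enough to put $H$ on the table, the assertion shouldn’t put it there either.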
Then I realised Jacob had written about this topic a few months back. People talk as though ‘responding to economic incentives’ requires conscious motivation, but actually there are lots of ways that incentives cause things to happen without humans consciously noticing the incentives and deliberately changing their behaviour: selection effects, reinforcement learning, and memetic evolution, for example.
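To make the selection-effect mechanism concrete, here’s a minimal toy simulation (my own sketch, not from Jacob’s post; the +0.5 persuasiveness bonus for false claims is a made-up assumption standing in for “falsehoods are unconstrained by evidence”):

```python
import random

random.seed(0)  # reproducible toy run

def make_claim():
    """A claim with a truth value and a persuasiveness score.

    Hypothetical assumption: false claims are slightly easier to make
    persuasive, since they aren't constrained by the evidence.
    """
    is_true = random.random() < 0.5
    persuasiveness = random.gauss(0, 1) + (0 if is_true else 0.5)
    return is_true, persuasiveness

def published_truth_rate(n_claims=10_000, n_published=100):
    claims = [make_claim() for _ in range(n_claims)]
    # "Journals" publish the most persuasive claims. No one checks truth,
    # and no author intends to deceive: the distortion is pure selection.
    published = sorted(claims, key=lambda c: c[1], reverse=True)[:n_published]
    return sum(is_true for is_true, _ in published) / n_published

print("truth rate among all claims:      ~0.50")
print(f"truth rate among published claims: {published_truth_rate():.2f}")
```

The published record ends up well below the 50% base rate of truth, even though every individual honestly reports what they found. The systematic deviation lives entirely in the filter, with no liar anywhere in the process.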
Similarly, what I’m looking for is basic terminology for pointing to processes that systematically produce persuasive things that aren’t true, terminology that doesn’t route through “this person is consciously deceiving me”. The scientists pushing adult neurogenesis aren’t lying. There’s a different force at work here, one we need to learn to give the same epistemic weight we give a liar’s claims, but without assuming conscious motivation is at its root and thus trying to treat it that way (e.g. with social punishment).
More broadly, it seems like there are persuasive systems in our environment that weren’t present in the environment of evolutionary adaptedness, and that we have not collectively learned to model clearly. Perhaps we should invest in some basic terminology that points to these systems, so we can learn to not-trust them without bringing in social-punishment norms.
Is this “bias”?
Yeah, good point, I may have reinvented the wheel. I have a sense that this isn’t quite the same thing as ‘bias’, but I need to think more.