I think it’s weird that saying a sentence with a falsehood that doesn’t change its informational content is sometimes considered worse than saying nothing, even if it leaves the person better informed than they were before.
This feels especially weird when the “lie” is creating a blank space on a map that you are capable of filling in (e.g., changing irrelevant details in an anecdote to anonymize a story with a useful lesson), rather than creating a misrepresentation on the map.
I’ve always thought it was weird that classical logic treats a list of statements joined by “and”s, where at least one statement in the list is false, as a single false statement. That doesn’t entirely match my intuition, or at least the intuition I’d like to have. If I’ve been told N things, and N−1 of those things are true, it seems like I’ve probably gained something, even if I’m not entirely sure which of the N statements is the false one.
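A toy calculation may make that intuition sharper. This is my own sketch, not anything from the discussion: assume the N claims are independent binary facts with uniform priors, and the listener knows that exactly one of the N reported claims is false but not which one. Hearing the report then narrows the possibilities from 2^N states down to the N states that differ from the report in exactly one place:

\[
\underbrace{N}_{\text{bits of prior uncertainty}}
\;-\;
\underbrace{\log_2 N}_{\text{bits left: which claim is false}}
\;=\; N - \log_2 N \;>\; 0
\quad \text{for } N \ge 2.
\]

On this toy model, the conjunction that classical logic scores as a single false statement still delivers most of the information of its true conjuncts; the only loss is the roughly log₂ N bits needed to pin down which claim was the false one.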
I think the convention makes sense because “lies are bad” is a much simpler norm than “lies are bad if they reduce the informational usefulness of the sentence below zero.” The latter is so complex that, if it were the accepted norm, it would probably be so difficult to enforce and so open to debate that it would lose its usefulness.
Do you have any examples in mind? I’m having a hard time thinking about this without something concrete, and I’m having trouble coming up with an example myself.
I’m surprised that you find this weird. Beliefs are multi-dimensional and extremely complicated—it’s almost trivial to construct cases where a loss in accuracy on one dimension paired with a gain on another is a net improvement.