I find the exact opposite rule of thumb to be way more helpful:
If someone seems to be talking nonsense, then I haven’t understood their POV yet.
…because “they’re talking nonsense” isn’t a hypothesis about them or the world. It’s a restatement of how I’m perceiving them. Conflating those two is exactly the kind of map/territory confusion that can leave me feeling smarter precisely because I’m being stupid.
I first grokked this in math education. When a student writes “24 – 16 = 12”, lots of teachers say that the student “wasn’t thinking” or “wasn’t paying attention”. They completely fail to notice that they’re just restating the fact that the student got the answer wrong. They’re not saying anything at all about why the student got the problem wrong.
…and some will double down: “Billy just doesn’t try very hard.”
A far, far more helpful hypothesis is that the student is doing placewise subtraction, subtracting the smaller digit from the larger in each column regardless of which number it came from. This lets you make predictions about their behavior, like that they’ll conclude that 36 – 29 should be 13. Then you can run experiments to test your model of their cognition.
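To make that hypothesis concrete, here’s a minimal sketch of the buggy rule as code (Python is my choice here; the thread itself contains no code, and the function name is just illustrative):

```python
def placewise_subtract(a: int, b: int) -> int:
    """Hypothesized buggy rule: line the two numbers up by place value,
    then in each column subtract the smaller digit from the larger,
    ignoring which number each digit actually came from."""
    da, db = str(a), str(b)
    width = max(len(da), len(db))
    da, db = da.zfill(width), db.zfill(width)  # pad so columns line up
    digits = [str(abs(int(x) - int(y))) for x, y in zip(da, db)]
    return int("".join(digits))

assert placewise_subtract(24, 16) == 12  # reproduces the observed error
assert placewise_subtract(36, 29) == 13  # the model's testable prediction
```

If the student’s answers match these predictions, the model gains credibility; if they write something like “23” instead, the model is falsified and you get to hunt for a better one.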
It’s quite shocking & confusing when the student writes “23” for that last problem. Seriously, I had a student who did this once. It’s a case study I used to give as a puzzle in the opening CFAR class. It turns out they’re following a very consistent rule! But most bright people have trouble seeing what it is even after twenty examples.
It might not be worth my time and/or energy to dive into someone’s psyche this way. But it seems super duper important as a matter of lucid map/territory clarity to remember that I could and that what they’re doing makes sense on the inside.
All of which is to say:
In practice I don’t think there’s such a thing as someone “actually just talking nonsense”.
For what that’s worth.
That’s an interesting observation! I’ve had something like this experience when teaching programming, going from trying to explain my approach to “oh I see, you have a different approach and it makes sense too”. Or just getting some partial understanding of what mental algorithm the student is executing and the ways in which it fails to match reality.
what they’re doing makes sense on the inside
I am wary of trying to understand people too hard. A lot of what people say is a network running in reverse, a bullshit generator for whatever conclusion they already decided on, whatever outcome is socially convenient for them. Sure, it makes sense to them on the inside, but soon (if it’s more convenient for them) they’ll believe a slightly different, or a very different, thing, without ever noticing they changed their mind.
I suppose the true understanding here would be to notice which parts of their belief system are more “real”/invariant, which parts are unreal/contradictory, and the invisible presence in the background that is pressuring the belief system into whatever shape it has at the moment.
So like I agree that “understanding what process is generating that output” is a better state to be in than “not understanding what process is generating that output”. But I don’t think “nonsense” is entirely in the map; consider:
This is a bit of a thing that I don’t know if you have any questions or need to be a good time to time to time
You might be able to guess what process generated that string of words. But I wouldn’t say “that process isn’t generating nonsense, it’s just outputting the max-probability result from such-and-such statistical model”. Rather, I’d say “that process is generating nonsense because it’s just outputting....”
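To make “outputting the max-probability result from such-and-such statistical model” concrete, here’s a toy sketch of one such process: greedy next-word selection from a bigram model. (The corpus and names here are invented for illustration; this is not a claim about what actually generated the string above.)

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model would be trained on far more text.
corpus = ("this is a bit of a thing that i do not know if you "
          "have any questions or need to be a good time").split()

# Count bigram transitions: word -> Counter of words that follow it.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def greedy_generate(seed: str, length: int = 12) -> str:
    """Always emit the single most probable next word. There is no
    model of meaning anywhere, just local word statistics, so the
    output tends to be locally fluent but globally incoherent."""
    words = [seed]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(greedy_generate("this"))  # e.g. "this is a bit of a bit of a ..."
```

Calling that output “nonsense” points at a real property of the generating process, not just at my failure to understand it.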
This leaves open a bunch of questions like
Can we come up with a sensible definition of “nonsense”?
Given such a definition, was Lacan talking nonsense?
What about a person with Receptive (Wernicke’s) aphasia?
Time Cube?
In fact, is talking nonsense a thing humans basically ever do in practice?
Do some people have a tendency to too-quickly assume nonsense, when in fact they simply don’t understand some not-nonsense? (How many such people? What are some factors that tend to predict them making this mistake?)
Do some people have the opposite tendency? (How many? What are some factors?)
I think your framing hides those questions without answering them.
As a tangent, it might be that Billy did placewise subtraction in this instance partly because he wasn’t paying attention. (Where “because” is imprecise, but I don’t feel like unpacking that right now.) “Billy wasn’t paying attention” is a claim about the territory that makes predictions about what Billy will do in future, which have partial but not total overlap with the predictions made by the claim “Billy did placewise subtraction”. (Some people who do placewise subtraction will do it regularly, and some will do it only when not paying attention.)
Of course, if you’re going to stop at “not paying attention” and not notice the “placewise subtraction” thing, that seems bad. And if you’re going to notice that Billy got the problem wrong and say “not paying attention” without checking whether Billy was paying attention, that seems bad too.