First, the issue somewhat drifted from “to what degree should you update on the basis of what looks stupid” to “how careful you need to be about updating your opinion of your opponents in an argument”.
I understand PoC to apply only in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students’ answers, and should instead worry about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it’s not obvious to me that you wanted to switch contexts: 7EE1D988 and RobinZ both look like they’re discussing conversations or arguments, and you may want to be clearer in the future about context changes.)
I am not primarily talking about arguments; I’m talking about the more general case of observing someone being stupid and updating on this basis towards the “this person is stupid” hypothesis.
Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).
Given this, who is doing the trusting or distrusting?
Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).
Should she tell herself her hardware is untrustworthy and invite Bob overnight?
From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition: perhaps she has social anxiety or paranoia she’s trying to overcome, and a trusted (probably female) friend doesn’t get the same threatening vibe from Bob.
True, which is why I want to compare my judgment to reality, not to itself. If you have decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one?
You don’t directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?
As a more mathematical example, in the iterated prisoner’s dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
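To make that concrete, here is a minimal sketch of the noisy iterated prisoner’s dilemma under purely illustrative assumptions (a 5% chance that each intended move gets flipped, and a 20% forgiveness rate for the forgiving variant; neither number comes from the original example):

```python
import random

# Payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(seen):
    """Cooperate first, then copy whatever the opponent was last seen doing."""
    return "C" if not seen else seen[-1]

def forgiving_tit_for_tat(seen, forgiveness=0.2):
    """Like TitForTat, but after a seen defection still cooperate with probability `forgiveness`."""
    if not seen or seen[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def match(strat_a, strat_b, rounds=1000, noise=0.05):
    """Noisy iterated prisoner's dilemma: each intended move is flipped with probability `noise`."""
    seen_by_a, seen_by_b = [], []   # what each player observed the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        if random.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

random.seed(0)
print("TFT vs TFT:            ", match(tit_for_tat, tit_for_tat))
print("forgiving vs forgiving:", match(forgiving_tit_for_tat, forgiving_tit_for_tat))
```

With settings like these, two TitForTat players keep echoing any accidental defection back and forth, while two forgiving players recover from it and stay close to mutual cooperation.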
I don’t see why.
This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people’s motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe “If I were to greet Mallory, he would snub me,” and thus, in order to avoid the status hit, I don’t say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don’t greet Mallory, then I don’t get any evidence! (There’s a toy sketch of this below.)
(For the PoC specifically, the hypothetical is generally “if I put extra effort into communicating with Mallory, that effort would be wasted,” where the PoC argues that you’ve probably overestimated the probability that you’ll waste effort. This is why RobinZ argues for disengaging with “I don’t have the time for this” rather than “I don’t think you’re worth my time.”)
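As a toy illustration of the greeting example above (all numbers hypothetical), here is a sketch of a Beta-Bernoulli model of the belief “Mallory would snub me”; the point is just that the posterior cannot move away from the prior, however wrong that prior is, unless I actually greet him:

```python
import random

def belief_after(greetings, true_p_snub, rng, prior_snub=9.0, prior_no_snub=1.0):
    """Posterior mean of P(Mallory snubs a greeting), Beta-Bernoulli, after actually greeting him."""
    snubs = sum(rng.random() < true_p_snub for _ in range(greetings))
    return (prior_snub + snubs) / (prior_snub + prior_no_snub + greetings)

rng = random.Random(0)
# Pessimistic prior equivalent to "9 imagined snubs out of 10 imagined greetings";
# in this toy world Mallory would actually snub only 10% of the time.
print(belief_after(0, 0.1, rng))    # 0.9  -- never greet him: no observations, the prior never moves
print(belief_after(50, 0.1, rng))   # ~0.2 -- actually greeting him is what generates the evidence
```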
But, as I’ve been saying in my responses to RobinZ, for me this doesn’t fall under the principle of charity; it falls under the principle of “don’t be an idiot yourself”.
I think that “don’t be an idiot” is far too terse a package. It’s like boiling down moral instruction to “be good,” without any hint that “good” is actually a tremendously complicated concept, and that being good is a difficult endeavor aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just “don’t be an idiot,” or would you point them to a list of biases and counterbiasing principles?
For an explanation, agreed; for a label, disagreed. That is, I think it’s important to reduce “don’t be an idiot” into its many subcomponents, and to identify them separately whenever possible.
perhaps she has social anxiety or paranoia she’s trying to overcome
That’s not the case where she shouldn’t trust her hardware—that’s the case where her software has a known bug.
In order to confirm or disconfirm that belief, I need to alter my behavior; if I don’t greet Mallory, then I don’t get any evidence!
Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it’s worth it, sometimes not.
where the PoC argues that you’ve probably overestimated the probability that you’ll waste effort.
Really? For a randomly sampled person, my prior already is that talking to him/her will be wasted effort. And if in addition to that he offers evidence of stupidity, well… I think you underappreciate opportunity costs—there are a LOT of people around and most of them aren’t very interesting.
I think that “don’t be an idiot” is far too terse a package.
Yes, but properly unpacking it will take between one and several books at best :-/
That’s not the case where she shouldn’t trust her hardware—that’s the case where her software has a known bug.
For people, is there a meaningful difference between the two? The primary difference between “your software is buggy” and “your hardware is untrustworthy” that I see is that the first suggests the solution is easier: just patch the bug! But it is rarely enough to just know that the problem exists, or what steps you should take to overcome it; generally one must train oneself into being someone who copes effectively with the problem (or, rarely, into someone who does not have the problem).
I think you underappreciate opportunity costs—there are a LOT of people around and most of them aren’t very interesting.
I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn’t mean you’re not overestimating that value, and we’re back to my root issue that your response to “your judgment of other people might be flawed” seems to be “but I’ve judged them already, why should I do it twice?”
Yes, but properly unpacking it will take between one and several books at best :-/
Indeed; I have at least a shelf (and growing) devoted to decision-making and ameliorative psychology.
For people, is there a meaningful difference between the two?
Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
“but I’ve judged them already, why should I do it twice?”
I said I will update on the evidence. The difference seems to be that you consider that insufficient—you want me to actively seek new evidence and I think it’s rarely worthwhile.
A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
I don’t think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
I don’t think this is a meaningful distinction for people.
You don’t think it’s meaningful to model people as having a hardware layer and a software layer? Why?
People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.
In more general terms, hardware = brain and software = mind.
Echoing the others, this is more dualistic than I’m comfortable with. It looks to me that in people, you just have ‘wetware’ that is both hardware and software simultaneously, rather than the crisp distinction that exists between them in silicon.
you want me to actively seek new evidence and I think it’s rarely worthwhile.
Correct. I do hope that you noticed that this still relies on a potentially biased judgment (“I think it’s rarely worthwhile” is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we’re at mutual understanding.
Echoing the others, this is more dualistic than I’m comfortable with
To quote myself, we’re talking about “model[ing] people as having a hardware layer and a software layer”. And to quote Monty Python, it’s only a model. It is appropriate for some uses and inappropriate for others. For example, I think it’s quite appropriate for a neurosurgeon. But it’s probably not as useful for thinking about biofeedback, to give another example.
I do hope that you noticed that this still relies on a potentially biased judgment
Of course, but potentially biased judgments are all I have. They are still all I have even if I were to diligently apply the PoC everywhere.
In Lumifer’s defense, this thread demonstrates pretty conclusively that “the principle of charity” is also far too terse a package. (:
Mm—that makes sense.
Well, not quite, I think the case here was/is that we just assign different meanings to these words.
P.S. And here is yet another meaning...
And to quote Monty Python, it’s only a model.
Huh, I don’t think I’ve ever understood that metaphor before. Thanks. It’s oddly dualist.
I’ll say it again: the PoC isn’t at all about when it’s worth investing effort in talking to someone.