the principle of charity does actually result in a map shift relative to the default.
What is the default? And is it everyone’s default, or only the unenlightened ones’, or whose?
This implies that the “default” map is wrong—correct?
if you have not used the principle of charity in reaching the belief
I don’t quite understand that. When I’m reaching a particular belief, I basically do it to the best of my ability—if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard—that I should apply it anyway even if I don’t think it’s needed?
An attribution error is an attribution error—if you recognize it you should fix it, and not apply global corrections regardless.
This implies that the “default” map is wrong—correct?
I am pretty sure that most humans are uncharitable in interpreting the skills, motives, and understanding of someone they see as a debate opponent, yes. This observation is basically the complement of the principle of charity- the PoC exists because “most people are too unkind here; you should be kinder to try to correct,” and if you have somehow hit the correct level of kindness, then no further change is necessary.
I don’t quite understand that. When I’m reaching a particular belief, I basically do it to the best of my ability—if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard
I think that the principle of charity is like other biases.
that I should apply it anyway even if I don’t think it’s needed?
This question seems just weird to me. How do you know you can trust your cognitive system that says “nah, I’m not being biased right now”? This calls to mind the statistical prediction rule results, where people would come up with all sorts of stories about why their impression was more accurate than linear fits to the accumulated data- but, of course, those were precisely the times when they should have silenced their inner argument and gone with the more accurate rule. The point of these sorts of things is that you take them seriously, even when you generate rationalizations for why you shouldn’t take them seriously!
(There are, of course, times when the rules do not apply, and not every argument against a counterbiasing technique is a rationalization. But you should be doubly suspicious of such arguments.)
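As a rough illustration of that statistical-prediction-rule result, here is a minimal sketch with entirely invented numbers (not the original clinical-versus-actuarial data): a plain linear fit to the accumulated cases beats an “impression” that uses the same cues but weights them idiosyncratically and adds case-by-case noise.

```python
# Hedged sketch of the statistical-prediction-rule point: synthetic cases only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
cues = rng.normal(size=(n, 2))                      # two observable cues per case
outcome = 1.0 * cues[:, 0] + 0.5 * cues[:, 1] + rng.normal(scale=1.0, size=n)

# "Actuarial" rule: ordinary least squares on the accumulated data.
coef, *_ = np.linalg.lstsq(cues, outcome, rcond=None)
rule_pred = cues @ coef

# "Impression": same cues, but misweighted and perturbed case by case
# (standing in for the stories about why this particular case is special).
impression = 0.3 * cues[:, 0] + 1.2 * cues[:, 1] + rng.normal(scale=0.8, size=n)

print("rule MAE:      ", np.mean(np.abs(rule_pred - outcome)))
print("impression MAE:", np.mean(np.abs(impression - outcome)))
```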
This question seems just weird to me. How do you know you can trust your cognitive system that says “nah, I’m not being biased right now”?
It’s weird to me that the question is weird to you X-/
You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or whether there is a persistent bias.
If you can’t trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.
It’s weird to me that the question is weird to you X-/
To me, a fundamental premise of the bias-correction project is “you are running on untrustworthy hardware.” That is, biases are not just of academic interest, and not just ways that other people make mistakes, but known flaws that you personally should attend to with regard to your own mind.
There’s more, but I think in order to explain that better I should jump to this first:
If you can’t trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.
You can ascribe different parts of your cognitive system different levels of trust, and build a hierarchy out of them. To illustrate with a simple example, I can model myself as having a ‘motive-detection system,’ which is normally rather accurate but loses accuracy when used on opponents. Then there’s a higher-level system that is a ‘bias-detection system’ which detects how much accuracy is lost when I use my motive-detection system on opponents. Because this is hierarchical, I think it bottoms out in a finite number of steps; I can use my trusted ‘statistical inference’ system to verify the results from my ‘bias-detection’ system, which then informs how I use the results from my ‘motive-detection system.’
Suppose I just had the motive-detection system, and learned of PoC. The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased. “All my opponents are malevolent or idiots, because I think they are.” The right thing to do would be to construct the bias-detection system, and actively behave in such a way as to generate more data to determine whether or not my motive-detection system is inaccurate, and if so, where and by how much. Only after a while of doing this can I begin to trust myself to know whether or not the PoC is needed, because by then I’ve developed a good sense of how unkind I become when considering my opponents.
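To make that hierarchy concrete, here is a toy sketch in which every name and number is hypothetical: a fast motive-detection layer produces snap judgments, and a bias-detection layer estimates how far those snap judgments drift on opponents by comparing them with later, more careful evaluations, then corrects accordingly.

```python
# Toy model of the hierarchy described above; all data and numbers are made up.
from statistics import mean

# (snap_judgment, careful_later_evaluation, was_opponent) records
history = [
    (0.8, 0.4, True), (0.9, 0.5, True), (0.7, 0.4, True),    # opponents
    (0.3, 0.3, False), (0.5, 0.4, False), (0.2, 0.2, False), # everyone else
]

def estimated_bias(records, opponent):
    """Bias-detection layer: average gap between snap judgment and later evaluation."""
    gaps = [snap - later for snap, later, opp in records if opp == opponent]
    return mean(gaps) if gaps else 0.0

def corrected_judgment(snap, opponent, records):
    """Motive-detection output, adjusted by the bias-detection layer."""
    return snap - estimated_bias(records, opponent)

print(corrected_judgment(0.85, opponent=True, records=history))   # pulled down a lot
print(corrected_judgment(0.85, opponent=False, records=history))  # barely moved
```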
If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by either severing the link between my belief in their evil stupidity and my actions when discussing with them, or by discarding that belief and seeing if the evidence causes it to regrow. I word it this way because one needs to move to the place of uncertainty, and then consider the hypotheses, rather than saying “Is my belief that my opponents are malevolent idiots correct? Well, let’s consider all the pieces of evidence that come to mind right now: yes, they are evil and stupid! Myth confirmed.”
Which brings us to here:
You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or whether there is a persistent bias.
Your cognitive system has a rather large degree of control over the reality that you perceive; to a large extent, that is the point of having a cognitive system. Unless the ‘usual way’ of verifying the accuracy of your cognitive system takes that into account, which it does not do by default for most humans, this will not remove most biases. For example, could you detect confirmation bias by checking whether more complete evaluations corroborate your initial perception? Not really- you need to have internalized the idea of ‘confirmation bias’ in order to define ‘more complete evaluations’ to mean ‘evaluations where I seek out disconfirming evidence also’ rather than just ‘evaluations where I accumulate more evidence.’
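A small simulation may make that point sharper (the rates here are assumptions, not data): if the evidence is filtered toward confirmation, accumulating more of it converges on the wrong answer; counting the disconfirming observations as well is what fixes the estimate.

```python
# Hedged sketch: filtered evidence vs. complete evidence, with assumed rates.
import random

random.seed(1)
TRUE_RATE = 0.3          # how often the person actually says something foolish
observations = [random.random() < TRUE_RATE for _ in range(10_000)]

# "More evidence" through a confirmation filter: foolish remarks are always
# remembered, sensible ones are noticed only 30% of the time.
remembered = [x for x in observations if x or random.random() < 0.3]
print("filtered estimate:  ", sum(remembered) / len(remembered))      # ~0.59

# "More complete evaluation": every observation counted, for and against.
print("unfiltered estimate:", sum(observations) / len(observations))  # ~0.30
```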
[Edit]: On rereading this comment, the primary conclusion I was going for- that PoC encompasses both procedural and epistemic shifts, which are deeply entwined with each other- is there but not as clear as I would like.
Before I get into the response, let me make a couple of clarifying points.
First, the issue somewhat drifted from “to what degree should you update on the basis of what looks stupid” to “how careful you need to be about updating your opinion of your opponents in an argument”. I am not primarily talking about arguments, I’m talking about the more general case of observing someone being stupid and updating on this basis towards the “this person is stupid” hypothesis.
Second, my evaluation of stupidity is based more on how a person argues than on what position he holds. To give an example, I know some smart people who have argued against evolution (not in the sense that it doesn’t exist, but rather in the sense that the current evolutionary theory is not a good explanation for a bunch of observables). On the other hand, if someone comes in and goes “ha ha duh of course evolution is correct my textbook says so what u dumb?”, well then...
“you are running on untrustworthy hardware.”
I don’t like this approach. Mainly this has to do with the fact that unrolling “untrustworthy” makes it very messy.
As you yourself point out, a mind is not a single entity. It is useful to treat it as a set or an ecology of different agents which have different capabilities, often different goals, and typically pull in different directions. Given this, who is doing the trusting or distrusting? And given the major differences between the agents, what does “trust” even mean?
I find this expression is usually used to mean that the human mind is not a simple-enough logical calculating machine. My first response to this is duh! and the second one is that this is a good thing.
Consider an example. Alice, a hetero girl, meets Bob at a party. Bob looks fine, speaks the right words, etc. and Alice’s conscious mind finds absolutely nothing wrong with the idea of dragging him into her bed. However her gut instincts scream at her to run away fast—for no good reason that her consciousness can discern. Basically she has a really bad feeling about Bob for no articulable reason. Should she tell herself her hardware is untrustworthy and invite Bob overnight?
The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased.
True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one? Does additional evidence support your initial reaction? If it does, you can probably trust your initial reactions more. If it does not, you can’t and should adjust.
Yes, I know about anchoring and such. But again, at some point you have to trust yourself (or some modules of yourself) because if you can’t there is just no firm ground to stand on at all.
If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by … discarding that belief and seeing if the evidence causes it to regrow.
I don’t see why. Just do the usual Bayesian updating on the evidence. If the weight of the accumulated evidence points out that they are not, well, update. Why do you have to discard your prior in order to do that?
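For concreteness, the usual updating in odds form might look like this sketch, with a made-up prior and made-up likelihood ratios: the prior is never discarded, it simply gets washed out if the evidence keeps pointing the other way.

```python
# Generic Bayesian updating in odds form; prior and likelihood ratios are invented.
def update_odds(prior_prob, likelihood_ratios):
    """Posterior after multiplying in each P(observation | idiot) / P(observation | not idiot)."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Initial impression: 70% "malevolent idiot". Later observations mostly point
# the other way (likelihood ratios below 1 favour "not an idiot").
print(update_odds(0.70, [0.5, 0.4, 0.6, 0.5]))   # ~0.12, belief updated away
```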
you need to have internalized the idea of ‘confirmation bias’ in order to define ‘more complete evaluations’ to mean ‘evaluations where I seek out disconfirming evidence also’ rather than just ‘evaluations where I accumulate more evidence.’
Yep. Which is why the Sequences, the Kahneman & Tversky book, etc. are all very useful. But, as I’ve been saying in my responses to RobinZ, for me this doesn’t fall under the principle of charity, this falls under the principle of “don’t be an idiot yourself”.
First, the issue somewhat drifted from “to what degree should you update on the basis of what looks stupid” to “how careful you need to be about updating your opinion of your opponents in an argument”.
I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students’ answers, and instead be worried about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it’s not obvious to me that you wanted to switch contexts- 7EE1D988 and RobinZ both look like they’re discussing conversations or arguments- and you may want to be clearer in the future about context changes.)
I am not primarily talking about arguments, I’m talking about the more general case of observing someone being stupid and updating on this basis towards the “this person is stupid” hypothesis.
Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).
Given this, who is doing the trusting or distrusting?
Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).
Should she tell herself her hardware is untrustworthy and invite Bob overnight?
From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition- perhaps she has social anxiety or paranoia she’s trying to overcome, and a trusted (probably female) friend doesn’t get the same threatening vibe from Bob.
True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one?
You don’t directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?
As a more mathematical example, in the iterated prisoner’s dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
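A quick self-play simulation shows the effect; the payoff matrix, noise level, and forgiveness rate below are chosen only for illustration. Plain tit-for-tat locks itself into retaliation spirals after an accidental defection, while the forgiving variant recovers cooperation and scores closer to the mutual-cooperation payoff.

```python
# Noisy iterated prisoner's dilemma, self-play; parameters chosen for illustration.
import random

random.seed(0)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
NOISE = 0.05   # chance an intended move comes out as the opposite

def tit_for_tat(opp_last):
    return "C" if opp_last is None else opp_last

def forgiving_tft(opp_last, forgiveness=0.3):
    if opp_last == "D" and random.random() < forgiveness:
        return "C"                      # sometimes let a defection slide
    return tit_for_tat(opp_last)

def self_play(strategy, rounds=20_000):
    last_a = last_b = None
    total = 0
    for _ in range(rounds):
        a, b = strategy(last_b), strategy(last_a)
        if random.random() < NOISE:
            a = "D" if a == "C" else "C"
        if random.random() < NOISE:
            b = "D" if b == "C" else "C"
        total += sum(PAYOFF[(a, b)])
        last_a, last_b = a, b
    return total / (2 * rounds)         # average payoff per player per round

print("tit-for-tat vs itself:   ", round(self_play(tit_for_tat), 2))
print("forgiving TFT vs itself: ", round(self_play(forgiving_tft), 2))
```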
I don’t see why.
This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people’s motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe “If I were to greet Mallory, he would snub me,” and thus in order to avoid the status hit I don’t say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don’t greet Mallory, then I don’t get any evidence!
(For the PoC specifically, the hypothetical is generally “if I put extra effort into communicating with Mallory, that effort would be wasted,” where the PoC argues that you’ve probably overestimated the probability that you’ll waste effort. This is why RobinZ argues for disengaging with “I don’t have the time for this” rather than “I don’t think you’re worth my time.”)
But, as I’ve been saying in my responses to RobinZ, for me this doesn’t fall under the principle of charity, this falls under the principle of “don’t be an idiot yourself”.
I think that “don’t be an idiot” is far too terse a package. It’s like boiling down moral instruction to “be good,” without any hint that “good” is actually a tremendously complicated concept, and that being good is a difficult endeavor aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just “don’t be an idiot” or would you point them to a list of biases and counterbiasing principles?
For an explanation, agreed; for a label, disagreed. That is, I think it’s important to reduce “don’t be an idiot” into its many subcomponents, and identify them separately whenever possible.
perhaps she has social anxiety or paranoia she’s trying to overcome
That’s not the case where she shouldn’t trust her hardware—that’s the case where her software has a known bug.
In order to confirm or disconfirm that belief, I need to alter my behavior; if I don’t greet Mallory, then I don’t get any evidence!
Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it’s worth it, sometimes not.
where the PoC argues that you’ve probably overestimated the probability that you’ll waste effort.
Really? For a randomly sampled person, my prior already is that talking to him/her will be wasted effort. And if in addition to that he offers evidence of stupidity, well… I think you underappreciate opportunity costs—there are a LOT of people around and most of them aren’t very interesting.
I think that “don’t be an idiot” is far too terse a package.
Yes, but properly unpacking it will take between one and several books at best :-/
That’s not the case where she shouldn’t trust her hardware—that’s the case where her software has a known bug.
For people, is there a meaningful difference between the two? The primary difference between “your software is buggy” and “your hardware is untrustworthy” that I see is that the first suggests the solution is easier: just patch the bug! It is rarely enough to just know that the problem exists, or what steps you should take to overcome the problem; generally one must train oneself into being someone who copes effectively with the problem (or, rarely, into someone who does not have the problem).
I think you underappreciate opportunity costs—there are a LOT of people around and most of them aren’t very interesting.
I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn’t mean you’re not overestimating that value, and we’re back to my root issue: that your response to “your judgment of other people might be flawed” seems to be “but I’ve judged them already, why should I do it twice?”
Yes, but properly unpacking it will take between one and several books at best :-/
Indeed; I have at least a shelf and growing devoted to decision-making and ameliorative psychology.
For people, is there a meaningful difference between the two?
Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
“but I’ve judged them already, why should I do it twice?”
I said I will update on the evidence. The difference seems to be that you consider that insufficient—you want me to actively seek new evidence and I think it’s rarely worthwhile.
A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.
I don’t think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
I don’t think this is a meaningful distinction for people.
You don’t think it’s meaningful to model people as having a hardware layer and a software layer? Why?
People can (and often do) have personality changes (and other changes of ‘mind’) after a stroke.
Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.
In more general terms, hardware = brain and software = mind.
Echoing the others, this is more dualistic than I’m comfortable with. It looks to me that in people, you just have ‘wetware’ that is both hardware and software simultaneously, rather than the crisp distinction that exists between them in silicon.
you want me to actively seek new evidence and I think it’s rarely worthwhile.
Correct. I do hope that you noticed that this still relies on a potentially biased judgment (“I think it’s rarely worthwhile” is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we’re at mutual understanding.
Echoing the others, this is more dualistic than I’m comfortable with
To quote myself, we’re talking about “model[ing] people as having a hardware layer and a software layer”. And to quote Monty Python, it’s only a model. It is appropriate for some uses and inappropriate for others. For example, I think it’s quite appropriate for a neurosurgeon. But it’s probably not as useful for thinking about biofeedback, to give another example.
I do hope that you noticed that this still relies on a potentially biased judgment
Of course, but potentially biased judgments are all I have. They are still all I have even if I were to diligently apply PoC everywhere.
In Lumifer’s defense, this thread demonstrates pretty conclusively that “the principle of charity” is also far too terse a package. (:
Mm—that makes sense.
Well, not quite, I think the case here was/is that we just assign different meanings to these words.
P.S. And here is yet another meaning...
Huh, I don’t think I’ve ever understood that metaphor before. Thanks. It’s oddly dualist.
I’ll say it again: the PoC isn’t at all about when it’s worth investing effort in talking to someone.
What is the reality about whether you interpreted someone correctly? When do you hit the bedrock of Real Meaning?
tl;dr: The principle of charity corrects biases you’re not aware of.