I’m very glad that you asked this! I think we can come up with some decent heuristics:
If you start out with some sort of inbuilt bad-faith detector, try to see when, in retrospect, it’s given you accurate readings, false positives, and false negatives. From time to time I catch myself doing this on a System 1 level without having planned to. It may be possible, if harder, to do this sort of intuition reshaping in response to evidence with System 2. Note that it sometimes takes a long time to find out whether your bad-faith-detecting intuitions were correct, and sometimes you never find out at all.
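In case it helps to make the keeping-score part concrete, here’s a minimal sketch in Python (the class and its names are hypothetical, not anything from the original discussion) of the kind of log I have in mind: record each time the detector fires or stays quiet, record what you eventually learn, and tally hits, false alarms, and misses. Cases where you never find out stay unscored:

```python
from dataclasses import dataclass, field

@dataclass
class DetectorLog:
    """Retrospective scorecard for a bad-faith detector (hypothetical helper)."""
    records: list = field(default_factory=list)

    def note(self, fired: bool, actually_bad_faith: bool | None) -> None:
        """fired: did your intuition go off?  actually_bad_faith: what you
        eventually learned, or None if you never found out."""
        self.records.append((fired, actually_bad_faith))

    def summary(self) -> dict:
        resolved = [(f, b) for f, b in self.records if b is not None]
        return {
            "hits": sum(1 for f, b in resolved if f and b),              # fired; they were in bad faith
            "false_alarms": sum(1 for f, b in resolved if f and not b),  # fired; they were fine
            "misses": sum(1 for f, b in resolved if not f and b),        # stayed quiet; they weren't fine
            "quiet_and_fine": sum(1 for f, b in resolved if not f and not b),
            "never_resolved": len(self.records) - len(resolved),
        }

log = DetectorLog()
log.note(fired=True, actually_bad_faith=True)    # a hit
log.note(fired=True, actually_bad_faith=False)   # a false alarm
log.note(fired=False, actually_bad_faith=None)   # never found out
print(log.summary())
# {'hits': 1, 'false_alarms': 1, 'misses': 0, 'quiet_and_fine': 0, 'never_resolved': 1}
```

The "never_resolved" bucket matters: per the point above, a lot of cases stay there forever, and pretending otherwise would itself be a miscalibration.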
There’s debate about whether a bad-faith-detecting intuition that fires when someone “has good intentions” but ends up predictably acting in ways that hurt you (especially to their own benefit) is “correct”. My view is that the intuition is correct; defining it as incorrect, and then acting socially as though it were incorrect, incentivizes others to manipulate you by being or becoming good at making themselves believe they have good intentions when they don’t, which is itself a way of destroying information. This is why letting people get away with too many plausibly deniable things destroys information: if plausible deniability is a socially acceptable defense when it’s obvious someone has hurt you in a way that benefits them, they’ll want to blind themselves to information about how their own brains work. (This is a reason to disagree with many of the suggestions in Nate’s post. If treating people as though they generally have positive intentions reduces your ability to do collaborative truth-seeking with others on how their minds can fail in ways that let you down, the planning fallacy being one example, then it may be helpful to socially disincentivize people from misleading themselves this way by giving them critical feedback, or at least by not tearing people down as ostracizers when they do the same.)
Try to evaluate others’ bad-faith detectors by the same mechanism as in the first point: if they give lots of correct readings and not many false ones (especially if they share their intuitions with you before it becomes obvious to you whether they’re correct), that’s evidence that they have strong and accurate bad-faith-detecting intuitions.
The above requires that you know someone well enough for them to trust you with this data, so a quicker way to evaluate others’ bad-faith-detecting intuitions is to look at who they give feedback to, criticize, praise, and so on. If they attack or socially qualify popular people who are later revealed to have been acting in bad faith, or praise and support people who were suspected of being up to something but are later revealed to have been acting in good faith, those are strong signals that their bad-faith-detecting intuitions are accurate.
Done right, bad-faith-detecting intuitions should let you make testable predictions about who will impose costs on, or provide benefits to, you and your friends/cause; these intuitions become more valuable as you become more accurate at evaluating them. Bad-faith-detecting intuitions might not “taste” like Officially Approved Scientific Evidence, and we might not respect them much around here, but they should tie back into reality, and you should be able to use them to make better decisions than you could have made without them.
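If you want to push the testable-predictions part further, one option (not something from the thread, just a standard forecasting tool) is to state probabilities in advance and score them once things resolve. A minimal sketch with made-up numbers:

```python
def brier_score(predictions: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    Each entry is (p, outcome): p is the probability you gave that
    someone would act in ways that impose costs on you; outcome is
    whether they did. 0.0 is perfect; always saying 50% scores 0.25.
    """
    return sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical track record: three resolved predictions.
my_calls = [(0.9, True), (0.2, False), (0.7, False)]
print(round(brier_score(my_calls), 3))  # 0.18
```

The same scoring works for the earlier points about evaluating other people’s detectors: whoever’s stated probabilities earn the lower score over the same set of resolved cases has the more accurate intuitions.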