Amusingly, one possible explanation is that the people who gave Gleb pushback on here were operating on bad-faith-detecting intuitions—this is supported by the quick reaction time. I’d say that those intuitions were good ones if they led those folks to give Gleb pushback on a quick timescale, and I’d also say that those intuitions shaped healthy norms to the extent that they nudged us toward establishing a quick, reality-grounded social feedback loop.
But the people who did give Gleb pushback framed things in terms other than bad-faith-detecting intuitions more often than you’d have guessed if they were actually concluding, based on those intuitions, that giving Gleb pushback was worth their time—they pointed to specific behaviors, and so on, when calling him out. But how many of these people actually decided to give Gleb feedback because they System-2-noticed that he was implementing a specific behavior, and how many of us decided to give Gleb feedback because our bad-faith-detecting intuitions noticed something was up, which led us to fish around for a specific bad behavior that Gleb was doing?
If more of us did the latter, this suggests that our social incentives reward fishing around until a specific bad behavior turns up. To me, fishing around for bad behaviors like this (i.e. fishing through data) doesn’t seem much different from p-hacking, except that fishing through social data is far harder to call people out on. And if our real reasons for reaching the correct conclusion that Gleb needed pushback were based in bad-faith-detecting intuitions, and not in System 2 noticing bad behaviors, then maybe it would be a good idea to give the mechanism that actually led some of us to detect Gleb a bit earlier the social allowance to do its work on its own in the future, rather than requiring that its use be backed up by evidence of specific bad behaviors. That sort of evidence is junk data: it can be p-hacked by those who want to criticize regardless of what’s true, and hidden by those with more skill than Gleb.
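To make the p-hacking analogy concrete, here’s a minimal simulation sketch. The numbers are illustrative assumptions on my part (a 5% per-check false-positive rate across 20 checked behaviors), not anything measured from the actual situation:

```python
import random

# Toy model of "fishing through social data": check many independent
# behaviors for suspiciousness, each of which falsely looks bad some
# small fraction of the time even for a person acting in good faith.
# (Hypothetical parameters: ALPHA and BEHAVIORS are illustrative.)

random.seed(0)
TRIALS = 10_000   # simulated good-faith people examined
BEHAVIORS = 20    # distinct behaviors checked per person
ALPHA = 0.05      # chance each behavior spuriously "looks bad"

hits = 0
for _ in range(TRIALS):
    # A person gets "caught" if any one of their checked behaviors
    # happens to look bad by chance.
    if any(random.random() < ALPHA for _ in range(BEHAVIORS)):
        hits += 1

print(f"'Evidence' found against a good-faith person "
      f"{hits / TRIALS:.0%} of the time")  # about 64%, i.e. 1 - 0.95**20
```

With 20 checks at a 5% false-positive rate each, you’d expect to find something on an innocent person about 1 - 0.95^20 ≈ 64% of the time, which is the sense in which fished-up behavioral evidence is junk data.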
At a minimum, being honest with ourselves about what our real reasons are ought to help us understand our minds a bit better.
But how many of these people actually decided to give Gleb feedback because they System-2-noticed that he was implementing a specific behavior, and how many of us decided to give Gleb feedback because our bad-faith-detecting intuitions noticed something was up, which led us to fish around for a specific bad behavior that Gleb was doing?
I don’t know if you can separate it this cleanly. Sometimes you get a smells-funny feeling and then your System 2 goes to investigate. But sometimes—and I think this was the case with Gleb—both System 1 and System 2 look at each other and chorus “Really, dude?” :-)
nod. This does seem like it should be a continuous thing, rather than System 1 figuring things out alone in some cases and System 2 figuring things out alone in others.
I sent a few private notes to him early on about the way I reacted to his posts. This wasn’t a “bad faith” detector (I don’t actually buy the premise—actual bad faith is VERY uncommon compared to honest incorrect values and beliefs); this was a pattern match to an overzealous, overconfident newbie, possibly with under-developed social skills. You know, just like all of us a few years (or in my case decades) ago.
This all sounds right, but the reasoning behind using the wording of “bad faith” is explained in the second bullet point of this comment.
Tl;dr: the module your brain has for detecting things that feel like “bad faith” is good at noticing when someone is acting in ways that cause bad consequences in expectation, even when those actions don’t feel like “bad faith” to that person on the inside. If people could correct a subset of these actions by learning, say, common social skills, then treating those actions as if they were taken in “bad faith” incentivizes them to learn those skills, which means you end up living with fewer negative consequences from dealing with them. I’d say this is part of why our minds often read well-intentioned-but-harmful-in-expectation behaviors as “bad faith”; it’s a way of correcting them.
Good observation.