This is an excellent point. The more relevant boundary seems like the one we usually refer to with the phrase “should have known”—and indeed this is more or less the notion that the courts use.
The question, then, is: do we have a satisfying account of “should have known”? If so: can we describe it sensibly and concisely? If not: can we formulate one?
I roughly agree that this is the most promising direction. In my mind the problem isn’t “did so-and-so lie, or rationalize?”; the question is “was so-and-so demonstrably epistemically negligent?” If so, and if you can fairly apply disincentives (or positive incentives for being epistemically non-negligent), then the first question just doesn’t matter.
In actual law, we have particular rules about what people are expected to know. It is possible we could construct such rules for LessWrong and/or the surrounding ecosystems, but I think doing so is legitimately challenging.
I disagree that answering the first question doesn’t matter—that’s a very extreme “mistake theory” lens.
Whether someone is actively adversarial, or merely biased but open to learning, changes quite a bit about how leaders and others in the community should approach the situation.
I do agree that it’s important to have the “are they actively adversarial” hypothesis and corresponding language. (This is why I’ve generally argued against the conflation of lying and rationalization).
But I also think that, at least in most of the disagreements and conflicts I’ve seen so far, much of the problem has had more to do with rationalization (or, in some cases, differing expectations of how much effort to put into intellectual integrity).
I think there is also an undercurrent of genuine conflict (as people jockey for money/status) that manifests primarily through rationalization, and in some cases duplicity.*
*where the issue is less that people are lying, and more that they are semi-consciously presenting different faces to different people.
Indeed, I agree that it would be more challenging for us, and I have some thoughts about why that is and how to mitigate it. That said, I think the most productive and actionable way to make progress on this is to look into the relevant legal standards: what standards are applied to “should have known” in criminal proceedings (in the U.S.? elsewhere?), in cases of civil liability, in contract law, in corporate law, etc.? By looking at what constraints these situations place on people, and what epistemic obligations are assumed, we can see how our needs are similar to or different from those contexts, which should give us ideas on how to formulate the relevant norms.
I think we, and others too, are already constructing such rules, though not as a single grand taxonomy completed in one grand project, but piecemeal, much as common law develops.
There have been recent shifts in ideas about what counts as ‘epistemically negligent’ [and that’s a great phrase by the way!], at least among some groups of people with which I’m familiar. I think the people of this site, and the greater diaspora, have much more stringent standards today in this area.