In another hypothetical where he was accused of sex crimes but everyone was fine with his epistemic tactics, we probably wouldn't have banned him either.
It seems that the issues with ialdabaoth's argumentation only appear in comments made after the allegations about his other behavior. So the argument that he is being banned for his epistemic tactics rather than his misconduct just moves the issue up one step:
Ialdabaoth is being banned for his poor epistemic tactics, not his conduct.
But his epistemic tactics are manipulative because of his conduct.
So the hypothetical where he was accused of sex crimes[1] and people didn’t mind his epistemic tactics isn’t a hypothetical. It was actuality. What we’ve observed is that after the accusations, certain people went from being fine with his epistemic tactics to not fine with them.
Which I don't believe is actually true. I have read the relevant posts, and none of them accuses him of committing a crime, only of manipulative sexual behavior. I will hedge this by acknowledging that I don't know the full situation.
I mistrusted ialdabaoth from the start, though it's worth saying that I judged him to be a dangerous manipulator and probable abuser based on in-person interactions long before the accusations came out, so my judgment isn't based solely on his LessWrong content.
In any case, I found it impossible to argue with him on his own terms (not because he'd make decent counterarguments, but because he'd try to corrupt the frame of the conversation instead of making counterarguments). So instead I did things like write this post as a direct rebuttal to something he'd written (maybe on LessWrong, maybe on Facebook) about how honesty and consent were fake concepts used to disguise the power differentials of popularity (which ultimately amounts to an implied "high-status people sometimes get away with bad behavior X, so don't condemn me when I do X").
I agree with this point, and it’s what originally motivated this paragraph of the OP:
It does seem important to point out that some of the standards used to assess that risk stem from processing what happened in the wake of the allegations. A brief characterization is that I think the community started to take more seriously not just the question "will I be better off adopting this idea?" but also the question "will this idea mislead someone else, or does it seem designed to?". If I had held my current standards in 2017, I think they would have sufficed to ban ialdabaoth then, or at least to do more to identify the need to argue against the misleading parts of his ideas.
One nonobvious point from this is that 2017 is well before the accusations were made, but a point at which I think there was sufficient community unease that a consensus could have been built if we had the systems to build that consensus absent accusations.
OK but what’s actually being done is a one-off ban of someone with multiple credible public rape allegations against them. The specific policy goal of developing better immune responses to epistemic corruption is just not relevant to that and I don’t see how anyone on the mod team is doing anything best explained by an attempt to solve that problem.
OK but what’s actually being done is a one-off ban of someone with multiple credible public rape allegations against them.
Also, what’s actually being done is a one-off ban of a user whose name starts with ‘i.’ That is, yes, I agree with the facts you present, and contest the claim of relevance / the act of presenting an interpretation as if it were a brute fact.
There is a symmetry to the situation, of course, where I am reporting what I believe my intentions are / the interpretation I was operating under while I made the decision, but no introspective access is perfect, and perhaps there are counterfactuals where our models predict different things and it would have gone the way you predict instead of the way I predict. Even so, I think it would be a mistake to not have the stated motivation as a hypothesis in your model to update towards or against as time goes on.
The specific policy goal of developing better immune responses to epistemic corruption is just not relevant to that
According to me, the relevance is that this action was taken to further that policy goal; I agree it is only weak evidence that we will succeed at that goal or even successfully implement policies that work towards that goal. I view this as a declaration of intent, not success, and specifically the intent that “next time, we will act against people who are highly manipulative and deceitful before they have clear victims” instead of the more achievable but less useful “once there’s consensus you committed crimes, not posting on LW is part of your punishment”.