We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban.
I’m having trouble convincing myself that this is the real reason. Imagine an alternate universe where their local analogue of Ialdabaoth was just as manipulative, wrote almost all the same things about status, power, social reality, &c., but was definitely not guilty of any sex crimes for reasons that had nothing to do with his moral character. (Perhaps imagine him having some kind of exotic sexual orientation that would be satisfied without human contact, like a statue fetish.) Would we really ban such a person on the grounds of manipulative epistemic tactics?
Your “fraudster designs a financial instrument” scenario explains why one should definitely be suspicious of the value of such a user’s contributions—but frankly, I’m suspicious of a lot of people in that way: how do you decide who to ban?
It occurs to me that the “reputation vs. merit” framing completely fails to address the reasons many would assign such bad reputation. (The function of bad reputation is to track things that are actually bad!) Maybe we just have a “moral taste” for not working with people who (are believed to) have committed sufficiently bad crimes, and we’re willing to pay the cost of forgoing positive contributions from them?
If that’s actually the psychology driving the decision, it would be better to fess up to it rather than making up a fake reason. Better for the physics journal editor to honestly say, “Look, I just don’t want to accept a paper from a murderer, okay?”, rather than claiming to be impartial and then motivatedly subjecting the paper to whatever isolated demands for rigor were necessary to achieve the appearance of having rejected it on the merits.
Would we really ban such a person on the grounds of manipulative epistemic tactics?
One of the big updates that I made over the course of this affair was the value of having a community-wide immune system, rather than being content with not getting sick myself. I think this is an example of what that sort of update looks like. Michael isn’t banned from LessWrong, but also hasn’t posted here in a year in a way that makes that question seem somewhat irrelevant. (Appropriately enough, his last comment was about suggesting that someone who is losing mental ground to their model of HPMOR!Quirrell talk to him to get the decision procedures from the source.) [Edit: I forgot about his more recent account, which is still fairly inactive.] [Edit2: I think it was probably a mistake to write the bits of this paragraph after the first sentence, because the example is unclear and mentioning users in the context of bans can have a chilling effect that I didn’t want to have here.]
So far, it seems like lots of things have been of the form: person (or group) has a mixed reputation, but is widely held in low regard (without the extent of that opinion being common knowledge), the generator of the low regard causes an explosion that makes them an outcast, and then after the fact people go “well, we saw that coming individually but didn’t know how to do anything about it socially.” It would be nice if we knew how to do things about it socially; when this happened a year ago, I made a list of “the next ialdabaoth” and one of the top 3 of that list is at the center of current community drama.
[This seems especially important given that normal coordination mechanisms of this form—gossip, picking up on who’s ‘creepy’ and who isn’t—rely on skills many rationalists don’t have, and sometimes have deliberately decided not to acquire.]
the value of having a community-wide immune system, rather than being content with not getting sick myself
I’d be very interested if you could elaborate on what observations make you think “the community” is doing the kind of information-processing that would result in “immune system” norms actually building accurate maps, rather than accelerating our decline into a cult.
It seems to me that what actually helped build common knowledge in the Ialdabaoth case was the victims posting their specific stories online, serving a role analogous to transcripts of witness testimony in court.[1]
In contrast, the conjunction of the “immune system” metaphor and your mention of Anna’s comment about Michael makes me imagine social norms that make it easier for high-ranking community members to silence potential rivals or whistleblowers by declaring them to be bad thinkers and therefore not worth listening to.
That is, I perceive a huge difference between, “Witnesses A, B, and C testified that X committed a serious crime and no exculpatory evidence has emerged, therefore I’m joining the coalition for ostracizing X” (analogous to a court) vs. “The mods declared that X uses manipulative epistemic tactics, therefore I’m going to copy that ‘antibody’ and not listen to anything X says” (analogous to an immune system).
But, maybe I’m completely misunderstanding what you meant by “immune system”? It would be great if you could clarify what you’re thinking here.
It would certainly be nice to have a distributed intellectual authority I could trust. I can imagine that such a thing could exist. But painful personal experience has me quite convinced that, under present conditions, there really is just no substitute for thinking for yourself (“not getting sick [one]self”).

Thanks to Michael Vassar for teaching me about the historical importance of courts!
It seems to me that what actually helped build common knowledge in the Ialdabaoth case was the victims posting their specific stories online, serving a role analogous to transcripts of witness testimony in court.[1]
I think the effects of that (on my beliefs, at least) were indirect. The accusations themselves didn’t move me very much, but caused a number of private and semi-public info-sharing conversations that did move me substantially.
That is, I perceive a huge difference between, “Witnesses A, B, and C testified that X committed a serious crime and no exculpatory evidence has emerged, therefore I’m joining the coalition for ostracizing X” (analogous to a court)
I do want to stress the ways in which the exile of Ialdabaoth does not match my standards for courts (although I agree it is analogous). The main issue, in my mind at least, is that no one had the clear mandate within the community to ‘try the case’, and those who stepped forward didn’t have broader social recognition of even their limited mandate. (No one could sue a judge or jury for libel if they found ialdabaoth guilty, but the panels that gathered evidence could be sued for libel for publishing their views on ialdabaoth.) And this is before we get to the way in which ‘the case’ was tried in multiple places with varying levels of buy-in from the parties involved.
But, maybe I’m completely misunderstanding what you meant by “immune system”? It would be great if you could clarify what you’re thinking here.
The thing that’s missing, in my mind, is the way in which antibodies get developed and amplified. That is, I’m less concerned with people deciding whether or not to copy a view, and more concerned with the view being put in public in the first place. My sense is that, by default, people rarely publicly share their worries about other people, and this gets worse instead of better if they suspect the person in question is adversarial. (If I think Bob is doing shady things, including silencing his enemies, this makes it harder to ask people what they think of Bob, whereas if Carol is generally incompetent and annoying, this makes it easier to ask people what they think of Carol.)
If you suspect there’s adversarial optimization going on, the default strategies seem to be ignoring it and hoping it goes away, or letting it develop until it destroys itself; the exceptional case is one where active countermeasures are taken. This is for a handful of reasons, one of which is that attempting to take such active countermeasures is generally opposed-by-default unless clear authority or responsibility has been established beforehand.
When it comes to putting views in public, it seems to me that posts like the OP or Anna’s post about Vassar do note concerns, but they leave the actual meat of the issue unsaid.

Michael Vassar, for example, spent a good portion of this year in Berlin, and I had decisions to make about to what extent I wanted to try to integrate him into the local community or avoid doing so.

Without the links in the comments, I wouldn’t have had a good basis for making decisions should ialdabaoth appear in Berlin.

I don’t know where ialdabaoth went into exile, but there’s a good chance that he will interact with other local rationality groups, who will have to make their own decisions and who would benefit from getting this information.
I think this is an example of what that sort of update looks like. Michael isn’t banned from LessWrong
Interesting that you should mention this. I’ve hugely benefited from collaborating with Michael recently. I think the linked comment is terrible, and I’ve argued with Anna about it several times. I had started drafting a public reply several months ago, but I had set it aside because (a) it’s incredibly emotionally painful to write because I simultaneously owe eternal life-debts of eternal loyalty to both Michael and Anna,[1] and (b) it isn’t even the most important incredibly-emotionally-painful high-community-drama-content piece of writing I have to do. The fact that you seem to take it this seriously suggests that I should prioritize finishing and posting my reply, though I must ask for your patience due to (b).
Like a robot in an Isaac Asimov story forced to choose between injuring a human being or, through inaction, allowing a human being to come to harm, I briefly worried that my behavior isn’t even well-defined in the event of a Michael–Anna conflict. (For the same reason, I assume it’s impossible to take more than one Unbreakable Vow in the world of Harry Potter and the Methods.) Then I remembered that disagreeing with someone’s blog comment isn’t an expression of disloyalty. If I were to write a terrible blog comment (and I’ve written many), then I should be grateful if Anna were to take the time to explain what she thinks I got wrong.
(Appropriately enough, his last comment was about suggesting that someone who is losing mental ground to their model of HPMOR!Quirrell talk to him to get the decision procedures from the source.)
You know, this is a really lame cheap shot—

If we’re going to play this frankly puerile game of bringing up who partially inspired what fictional characters, do I at least get to bring up “The Sword of Good”?
The Lord of Dark stared at Hirou as though he were the crazy one. “The Choice between Good and Bad,” said the Lord of Dark in a slow, careful voice, as though explaining something to a child, “is not a matter of saying ‘Good!’ It is about deciding which is which.”
Dolf uttered a single bark of laughter. “You’re mad!” his voice boomed. “Can you truly not know that you are evil? You, the Lord of Dark?”
“Names,” said the Lord of Dark quietly.
[...]
Hirou staggered, and was distantly aware of the Lord of Dark catching him as he fell, to lay him gently on the ground.
In a whisper, Hirou said “Thank you—” and paused.
“My name is Vhazhar.”
“You didn’t trust yourself,” Hirou whispered. “That’s why you had to touch the Sword of Good.”
Hirou felt Vhazhar’s nod, more than seeing it.
The air was darkening, or rather Hirou’s vision was darkening, but there was something terribly important left to say. “The Sword only tests good intentions,” Hirou whispered. “It doesn’t guide your steps. That which empowers a hero does not make us wise—desperation strengthens your hand, but it strikes with equal force in any direction—”
“I’ll be careful,” said the Lord of Dark, the one who had mastered and turned back the darkness. “I won’t trust myself.”
“You are—” Hirou murmured. “Than me, you are—”
I should have known. I should have known from the beginning. I was raised in another world. A world where royal blood is not a license to rule, a world whose wizards do more than sneer from their high towers, a world where life is not so cheap, where justice does not come as a knife in the night, a world where we know that the texture of a race’s skin shouldn’t matter—
And yet for you, born in this world, to question what others took for granted; for you, without ever touching the Sword, to hear the scream that had to be stopped at all costs—
“I don’t trust you either,” Hirou whispered, “but I don’t expect there’s anyone better,” and he closed his eyes until the end of the world.
I confess I don’t know what you’re trying to say here. I have a few vague hypotheses, but none that stand out as particularly likely based on either the quoted text or the context. (E.g. one of them is “remember that something that looks/is called evil, may not be”; but only a small part of the text deals with that, and even if you’d said it explicitly I wouldn’t know why you’d said it. The rest are all on about that level.)
Vaniver mentioned that Michael Vassar was one of the partial inspirations for a supervillain in one of Eliezer Yudkowsky’s works of fiction. I’m saying that, firstly, I don’t think that’s germane in a discussion of moderation policies that aspires to impartiality, even as a playful “Appropriately enough [...]” parenthetical. But secondly, if such things are somehow considered to be relevant, then I want to note that Michael was also the explicit namesake of a morally-good fictional character (“Vhazhar”) in another one of Yudkowsky’s stories.
The fact that the latter story is also about the importance of judging things on their true merits rather than being misled by shallow pattern-matching (e.g., figuring that a “Lord of Dark” must be evil, or using someone’s association with a fictional character to support the idea that they might be worth banning) made it seem worth quoting at length.

He seems to have a different account with more recent contributions.

Thanks, fixed.
Imagine an alternate universe where their local analogue of Ialdabaoth was just as manipulative, wrote almost all the same things about status, power, social reality, &c., but was definitely not guilty of any sex crimes for reasons that had nothing to do with his moral character.
The post is arguing that the things ialdabaoth writes regarding social dynamics, power, manipulation, etc. are the result of his presumed guilt. In other words, if ialdabaoth had a different fetish, he wouldn’t write the things that he does about social reality, etc., and we wouldn’t even be having this discussion in the first place. The argument, which I’m not sure I endorse, is that a world in which ialdabaoth writes exactly what he writes without being guilty is as logically coherent as a world in which matches don’t light, but cells still use ATP.
I see the argument, but I don’t buy it empirically. Understanding social dynamics, power, manipulation, &c. is useful for acquiring the funds to buy the best statues.
I participated in the LW team discussion about whether to ban, but not in the details of this announcement. I agree that, in your hypothetical, we probably wouldn’t have banned. In another hypothetical where he were accused of sex crimes but everyone was fine with his epistemic tactics, we probably wouldn’t have banned either.
In another hypothetical where he were accused of sex crimes but everyone was fine with his epistemic tactics, we probably wouldn’t have banned either
It seems that the issues with ialdabaoth’s argumentation only appear in comments made after the allegations related to other behaviors of his. Therefore the argument that he is being banned for his epistemic tactics rather than his misconduct just moves the issue up one step:
Ialdabaoth is being banned for his poor epistemic tactics, not his conduct.
But his epistemic tactics are manipulative because of his conduct.
So the hypothetical where he was accused of sex crimes[1] and people didn’t mind his epistemic tactics isn’t a hypothetical. It was actuality. What we’ve observed is that after the accusations, certain people went from being fine with his epistemic tactics to not fine with them.
Which I don’t believe is actually true. I have read the relevant literature and no post makes an accusation that a crime has been committed, only manipulative sexual behavior. I will hedge this statement by acknowledging that I do not know the full situation.
I mistrusted ialdabaoth from the start, though it’s worth saying that I judged him to be a dangerous manipulator and probable abuser from in-person interactions long before the accusations came out, so it’s not just his LessWrong content.
In any case, I found it impossible to argue on his own terms (not because he’d make decent counterarguments, but because he’d try to corrupt the frame of the conversation instead of making counterarguments). So instead I did things like write this post as a direct rebuttal to something he’d written (maybe on LessWrong, maybe on Facebook) about how honesty and consent were fake concepts used to disguise the power differentials of popularity (which ultimately culminates in an implied “high status people sometimes get away with bad behavior X, so don’t condemn me when I do X”.)
I agree with this point, and it’s what originally motivated this paragraph of the OP:
It does seem important to point out that some of the standards used to assess that risk stem from processing what happened in the wake of the allegations. A brief characterization is that I think the community started to take more seriously not just the question of “will I be better off adopting this idea?” but also the question “will this idea mislead someone else, or does it seem designed to?”. If I had my current standards in 2017, I think they would have sufficed to ban ialdabaoth then, or at least do more to identify the need for arguing against the misleading parts of his ideas.
One nonobvious point from this is that 2017 is well before the accusations were made, but a point at which I think there was sufficient community unease that a consensus could have been built if we had the systems to build that consensus absent accusations.
OK but what’s actually being done is a one-off ban of someone with multiple credible public rape allegations against them. The specific policy goal of developing better immune responses to epistemic corruption is just not relevant to that and I don’t see how anyone on the mod team is doing anything best explained by an attempt to solve that problem.
OK but what’s actually being done is a one-off ban of someone with multiple credible public rape allegations against them.
Also, what’s actually being done is a one-off ban of a user whose name starts with ‘i.’ That is, yes, I agree with the facts you present, and contest the claim of relevance / the act of presenting an interpretation as if it were a brute fact.
There is a symmetry to the situation, of course, where I am reporting what I believe my intentions are / the interpretation I was operating under while I made the decision, but no introspective access is perfect, and perhaps there are counterfactuals where our models predict different things and it would have gone the way you predict instead of the way I predict. Even so, I think it would be a mistake to not have the stated motivation as a hypothesis in your model to update towards or against as time goes on.
The specific policy goal of developing better immune responses to epistemic corruption is just not relevant to that
According to me, the relevance is that this action was taken to further that policy goal; I agree it is only weak evidence that we will succeed at that goal or even successfully implement policies that work towards that goal. I view this as a declaration of intent, not success, and specifically the intent that “next time, we will act against people who are highly manipulative and deceitful before they have clear victims” instead of the more achievable but less useful “once there’s consensus you committed crimes, not posting on LW is part of your punishment”.