Would we really ban such a person on the grounds of manipulative epistemic tactics?
One of the big updates that I made over the course of this affair was the value of having a community-wide immune system, rather than being content with not getting sick myself. I think this is an example of what that sort of update looks like. Michael isn’t banned from LessWrong, but he also hasn’t posted here in a year, which makes that question seem somewhat irrelevant. (Appropriately enough, his last comment was about suggesting that someone who is losing mental ground to their model of HPMOR!Quirrell talk to him to get the decision procedures from the source.) [Edit: I forgot about his more recent account, which is still fairly inactive.] [Edit2: I think it was probably a mistake to write the bits of this paragraph after the first sentence, because the example is unclear and mentioning users in the context of bans can have a chilling effect that I didn’t want to have here.]
So far, it seems like lots of these situations have had the same shape: a person (or group) has a publicly mixed reputation but is privately held in low regard by many (without the extent of that opinion being common knowledge); whatever generates the low regard eventually causes an explosion that makes them an outcast; and after the fact people say “well, we saw that coming individually, but didn’t know how to do anything about it socially.” It would be nice if we knew how to do things about it socially; when this happened a year ago, I made a list of “the next ialdabaoth”, and one of the top three on that list is at the center of the current community drama.
[This seems especially important given that normal coordination mechanisms of this form—gossip, picking up on who’s ‘creepy’ and who isn’t—rely on skills many rationalists don’t have, and sometimes have deliberately decided not to acquire.]
the value of having a community-wide immune system, rather than being content with not getting sick myself
I’d be very interested if you could elaborate on what observations make you think “the community” is doing the kind of information-processing that would result in “immune system” norms actually building accurate maps, rather than accelerating our decline into a cult.
It seems to me that what actually helped build common knowledge in the Ialdabaoth case was the victims posting their specific stories online, serving a role analogous to transcripts of witness testimony in court.[1]
In contrast, the conjunction of the “immune system” metaphor and your mention of Anna’s comment about Michael makes me imagine social norms that make it easier for high-ranking community members to silence potential rivals or whistleblowers by declaring them to be bad thinkers and therefore not worth listening to.
That is, I perceive a huge difference between, “Witnesses A, B, and C testified that X committed a serious crime and no exculpatory evidence has emerged, therefore I’m joining the coalition for ostracizing X” (analogous to a court) vs. “The mods declared that X uses manipulative epistemic tactics, therefore I’m going to copy that ‘antibody’ and not listen to anything X says” (analogous to an immune system).
But, maybe I’m completely misunderstanding what you meant by “immune system”? It would be great if you could clarify what you’re thinking here.
It would certainly be nice to have a distributed intellectual authority I could trust. I can imagine that such a thing could exist. But painful personal experience has me quite convinced that, under present conditions, there really is just no substitute for thinking for yourself (“not getting sick [one]self”).

[1] Thanks to Michael Vassar for teaching me about the historical importance of courts!
It seems to me that what actually helped build common knowledge in the Ialdabaoth case was the victims posting their specific stories online, serving a role analogous to transcripts of witness testimony in court.[1]
I think the effects of that (on my beliefs, at least) were indirect. The accusations themselves didn’t move me very much, but caused a number of private and semi-public info-sharing conversations that did move me substantially.
That is, I perceive a huge difference between, “Witnesses A, B, and C testified that X committed a serious crime and no exculpatory evidence has emerged, therefore I’m joining the coalition for ostracizing X” (analogous to a court)
I do want to stress the ways in which the exile of Ialdabaoth does not match my standards for courts (although I agree it is analogous). The main issue, in my mind at least, is that no one had a clear mandate within the community to ‘try the case’, and those who stepped forward didn’t have broader social recognition of even their limited mandate. (No one could sue a judge or jury for libel if they found ialdabaoth guilty, but the panels that gathered evidence could be sued for libel for publishing their views on ialdabaoth.) And this is before we get to the way in which ‘the case’ was tried in multiple places with varying levels of buy-in from the parties involved.
But, maybe I’m completely misunderstanding what you meant by “immune system”? It would be great if you could clarify what you’re thinking here.
The thing that’s missing, in my mind, is the way in which antibodies get developed and amplified. That is, I’m less concerned with people deciding whether or not to copy a view, and more concerned with the view being put in public in the first place. My sense is that, by default, people rarely publicly share their worries about other people, and this gets worse instead of better if they suspect the person in question is adversarial. (If I think Bob is doing shady things, including silencing his enemies, this makes it harder to ask people what they think of Bob, whereas if Carol is generally incompetent and annoying, this makes it easier to ask people what they think of Carol.)
If you suspect there’s adversarial optimization going on, the default strategies seem to be ignoring it and hoping it goes away, or letting it develop until it destroys itself; the exceptional case is one where active countermeasures are taken. This is for a handful of reasons, one of which is that attempting to take such active countermeasures is generally opposed by default, unless clear authority or responsibility has been established beforehand.
When it comes to putting views in public, it seems to me that posts like the OP or Anna’s post about Vassar do note concerns, but leave the actual meat of the issue unsaid.
Michael Vassar, for example, spent a good portion of this year in Berlin, and I had decisions to make about to what extent I wanted to try to integrate him into the local community or avoid doing so.
Without the links in the comments, I wouldn’t have had a good case for making decisions should ialdabaoth appear in Berlin.
I don’t know where ialdabaoth went into exile, but there’s a good chance that he will interact with other local rationality groups, who will have to make their own decisions and who would benefit from getting this information.
I think this is an example of what that sort of update looks like. Michael isn’t banned from LessWrong
Interesting that you should mention this. I’ve hugely benefited from collaborating with Michael recently. I think the linked comment is terrible, and I’ve argued with Anna about it several times. I had started drafting a public reply several months ago, but I had set it aside because (a) it’s incredibly emotionally painful to write because I simultaneously owe eternal life-debts of eternal loyalty to both Michael and Anna,[1] and (b) it isn’t even the most important incredibly-emotionally-painful high-community-drama-content piece of writing I have to do. The fact that you seem to take it this seriously suggests that I should prioritize finishing and posting my reply, though I must ask for your patience due to (b).
Like a robot in an Isaac Asimov story forced to choose between injuring a human being or, through inaction, allowing a human being to come to harm, I briefly worried that my behavior isn’t even well-defined in the event of a Michael–Anna conflict. (For the same reason, I assume it’s impossible to take more than one Unbreakable Vow in the world of Harry Potter and the Methods.) Then I remembered that disagreeing with someone’s blog comment isn’t an expression of disloyalty. If I were to write a terrible blog comment (and I’ve written many), then I should be grateful if Anna were to take the time to explain what she thinks I got wrong.
(Appropriately enough, his last comment was about suggesting that someone who is losing mental ground to their model of HPMOR!Quirrell talk to him to get the decision procedures from the source.)
You know, this is a really lame cheap shot—if we’re going to play this frankly puerile game of bringing up who partially inspired what fictional characters, do I at least get to bring up “The Sword of Good”?
The Lord of Dark stared at Hirou as though he were the crazy one. “The Choice between Good and Bad,” said the Lord of Dark in a slow, careful voice, as though explaining something to a child, “is not a matter of saying ‘Good!’ It is about deciding which is which.”
Dolf uttered a single bark of laughter. “You’re mad!” his voice boomed. “Can you truly not know that you are evil? You, the Lord of Dark?”
“Names,” said the Lord of Dark quietly.
[...]
Hirou staggered, and was distantly aware of the Lord of Dark catching him as he fell, to lay him gently on the ground.
In a whisper, Hirou said “Thank you—” and paused.
“My name is Vhazhar.”
“You didn’t trust yourself,” Hirou whispered. “That’s why you had to touch the Sword of Good.”
Hirou felt Vhazhar’s nod, more than seeing it.
The air was darkening, or rather Hirou’s vision was darkening, but there was something terribly important left to say. “The Sword only tests good intentions,” Hirou whispered. “It doesn’t guide your steps. That which empowers a hero does not make us wise—desperation strengthens your hand, but it strikes with equal force in any direction—”
“I’ll be careful,” said the Lord of Dark, the one who had mastered and turned back the darkness. “I won’t trust myself.”
“You are—” Hirou murmured. “Than me, you are—”
I should have known. I should have known from the beginning. I was raised in another world. A world where royal blood is not a license to rule, a world whose wizards do more than sneer from their high towers, a world where life is not so cheap, where justice does not come as a knife in the night, a world where we know that the texture of a race’s skin shouldn’t matter—
And yet for you, born in this world, to question what others took for granted; for you, without ever touching the Sword, to hear the scream that had to be stopped at all costs—
“I don’t trust you either,” Hirou whispered, “but I don’t expect there’s anyone better,” and he closed his eyes until the end of the world.
I confess I don’t know what you’re trying to say here. I have a few vague hypotheses, but none that stand out as particularly likely based on either the quoted text or the context. (E.g., one of them is “remember that something that looks/is called evil may not be”; but only a small part of the text deals with that, and even if you’d said it explicitly I wouldn’t know why you’d said it. The rest are all at about that level.)
Vaniver mentioned that Michael Vassar was one of the partial inspirations for a supervillain in one of Eliezer Yudkowsky’s works of fiction. I’m saying that, firstly, I don’t think that’s germane in a discussion of moderation policies that aspires to impartiality, even as a playful “Appropriately enough [...]” parenthetical. But secondly, if such things are somehow considered to be relevant, then I want to note that Michael was also the explicit namesake of a morally-good fictional character (“Vhazhar”) in another one of Yudkowsky’s stories.
The fact that the latter story is also about the importance of judging things on their true merits rather than being misled by shallow pattern-matching (e.g., figuring that a “Lord of Dark” must be evil, or using someone’s association with a fictional character to support the idea that they might be worth banning) made it seem worth quoting at length.
Michael isn’t banned from LessWrong, but he also hasn’t posted here in a year

He seems to have a different account with more recent contributions.
Thanks, fixed.