I currently know of no person who would describe ialdabaoth as a good influence
“Good influence” in what context? I remember finding some of his Facebook posts/comments insightful, and I’m glad I read them. It shouldn’t be surprising that someone could have some real insights (and thereby constitute a “good influence” in the capacity of social-studies blogging), while also doing a lot of bad things (and thereby constituting a very bad influence in the capacity of being a real-life community member), even if the content of the insights is obviously related to the bad things. (Doing bad things and getting away with them for a long time requires skills that might also lend themselves to good social-studies blogging.)
Less Wrong is in the awkward position of being a public website (anyone can submit blog posts about rationality under a made-up name), and also being closely associated with a real-life community with dense social ties, group houses, money, &c. If our actual moderation algorithm is, “Ban people who have been justly ostracized from the real-life community as part of their punishment, even if their blog comments were otherwise OK”, that’s fine, but we shouldn’t delude ourselves about what the algorithm is.
I remember finding some of his Facebook posts/comments insightful, and I’m glad I read them.
If you learned that someone in the rationality community had taken on ialdabaoth as a master (like in the context of zen, or a PhD advisor, or so on), would you expect them to grow in good directions or bad directions? [Ideally, from the epistemic state you were in ~2 years ago, rather than the epistemic state you’re in now.]
I acknowledge that this is quite different from the “would you ever appreciate coming across their thoughts?” question; as it happens, I upvoted Affordance Widths when I first saw it, because it was a neat simple presentation of a model of privilege, and I wasn’t taking responsibility for his collected works or thinking about how it fit into them. I mistakenly typical-minded on which parts of his work people were listening to, and which they were safely ignoring.
If our actual moderation algorithm is, “Ban people who have been justly ostracized from the real-life community as part of their punishment, even if their blog comments were otherwise OK”, that’s fine, but we shouldn’t delude ourselves about what the algorithm is.
I agree that algorithm could be fine, for versions of LessWrong that are focused primarily on the community instead of on epistemic progress, but am trying to help instantiate the version of LessWrong that is primarily focused on epistemic progress.
If you learned that someone in the rationality community had taken on ialdabaoth as a master (like in the context of zen, or a PhD advisor, or so on), would you expect them to grow in good directions or bad directions?
Bad directions. The problem is that I also think this of other users who we aren’t banning, which suggests that our standard for “allowed to post on Less Wrong” is lower than “would be a good master.”
[Ideally, from the epistemic state you were in ~2 years ago, rather than the epistemic state you’re in now.]
Okay, right, I’m much less sure that I would have said “bad directions” as confidently 2 years ago.
am trying to help instantiate the version of LessWrong that is primarily focused on epistemic progress.
Thanks for this!! (I think I’m much more worried than you about the failure mode where something claiming to make intellectual progress is actually doing something else, which makes me more willing to tolerate pragmatic concessions of principle that are explicitly marked as such, when I’m worried that the alternative is the concession being made anyway with a fake rationale attached.)
The problem is that I also think this of other users who we aren’t banning, which suggests that our standard for “allowed to post on Less Wrong” is lower than “would be a good master.”
I interpreted ialdabaoth as trying to be a master, in a way that I do not interpret most of the people who fail my “would be a good master” check. (Most of them are just not skilled in that sort of thing and don’t seek it out.) If there are people you think would both predictably mislead their students and appear to be trying to recruit students, I’m interested in hearing about it.
I think I’m much more worried than you about the failure mode where something claiming to make intellectual progress is actually doing something else
This is possible; my suspicion is that we have similar levels of dispreference for that and different models of how attempts to make intellectual progress go astray. I try to be upfront about when I make pragmatic concessions, as I think you’ve had some evidence of, in part so that when I am not making such a concession it’s more believable. [Of course, for someone without observations of that sort of thing, I don’t expect me claiming that I do it to be much evidence.]