I would probably be accused of engaging in the sort of counterproductive criticism you are talking about in your post. I am trying to think of whether I have ever faced that sort of criticism myself on LessWrong. I’ve written quite a few posts that have received a significant amount of attention, e.g.:
Are smart people’s personal experiences biased against general intelligence?
Instrumental convergence is what makes general intelligence possible
Latent variables for prediction markets: motivation, technical guide, and design considerations
Random facts can come back to bite you
What sorts of preparations ought I do in case of further escalation in Ukraine?
🤔 Coordination explosion before intelligence explosion...?
Towards a comprehensive study of potential psychological causes of the ordinary range of variation of affective gender identity in males
Maybe I’ve just been lucky, but even so, it at least seems like the base rate for being unlucky is fairly low.
A kind of unfortunate thing about Zack’s response is that he follows the LessWrong taboo on politically charged examples, which makes it unclear what kinds of disagreements he has in mind in his post. To familiarize yourself with the background, you can take a look at Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda, but the TL;DR is that leaders in the rationalist community have often been acting in bad faith with respect to trans issues.
This sort of situation makes calls for assumptions of good faith a threat to Zack, because such calls undermine the creation of common knowledge about the situation he is in. If you want to be able to make calls for good faith assumptions without triggering Zack, I think the best approach would be to restore good faith by pushing rationalist leaders to make amends to Zack, if you have any leverage within the rationalist community, and to come up with structural solutions to prevent it from happening again. (After all, we certainly can’t assume good faith when there’s clear bad faith going on.)
More generally, one thing that can be done to prevent these sorts of traumas from building up is taking more care to resolve objections/complaints. If someone has an issue with something you are doing/saying, then there’s a good chance you are doing/saying something wrong, and by listening to them and correcting yourself, you can eliminate the complaint in a mutually satisfying way.
Gonna bring up another case where someone was critical on LessWrong: someone advocated for socialist firms, and I (being critical) spot-checked their claims about employee engagement and co-op productivity. Does the “Killing Socrates” point apply to my comments here? Why/why not?
Personally, I think those comments are good.
In the first, you take a place where someone has cited very specific numbers, and ask them to (basically) say where those numbers came from. Then, you note a problem with the claim “if [socialist] then [more productive]” by pointing out that socialist firms haven’t come to dominate the economy.
In the second, you accommodate the other user’s request that you target your spot-check to co-ops, note that your spot-check led you to the opposite conclusion of the other user, and also note a confusion about whether the cited study is even coherent.
I think this is extremely standard, central LW skepticism in its healthy form.
Some things those comments do not do:
Attempt to set the frame of the discussion to something like ~”obviously this objection I have raised is fundamentally a defeater of your whole point; if you do not answer it satisfactorily then your thesis has been debunked,” which is a thing that Socrati tend to do to a pretty substantial degree.
Leap to conclusions about what the author meant, or what their point is tantamount to, and then run off toward the horizon without checking back in. You’re just straightforwardly expressing confusions and curiosities, not putting words in the author’s mouth.
Create a large burden-of-response upon the author. They’re pretty atomic, simple points; each could be replied to without the author having to do the equivalent of writing a whole new essay.
Cause a tree-explosion where, like, the response to Objection A generates Objections B and C, and then the response to Objection B generates Objections D and E, and then the response to Objection C generates Objections F, G, and H, all without a) consideration for whether the other person intends to go that deep, or b) consideration for whether the Objections are anywhere near the crux of the issue.
I apologize; this comment is sort of dashed off and I would’ve liked to have an additional spoon for it; I don’t think I’ve been exhaustive or thorough. But hopefully this gives at least a little more detail.
So maybe a better example of the problem you are talking about is this, where I basically end up in a position of “if you cannot give an explanation of how this neurological study supports your point, then your point is obscurantist”? My behavior in this thread could sort of be said to contain all four of the issues you mention. However, rereading the original post and thread makes me feel like it was pretty appropriate. There were some things I could have done better, but my response was better than nothing, despite ultimately being a criticism.
I think that sometimes it is true that you and a conversational partner will be in a situation where it really actually seems like the highest-probability hypothesis, by a high margin, is that they can’t explain because their point has no merit.
I think one arrives at such a high probability on such a sad hypothesis due to specific kinds of evidence.
I think that often, people are overconfident that that’s what’s going on, and undercounting hypotheses like “the person wants to have the conversation at a different level of meta” or “they just do not have the time to reply to each of ten different commenters who are highly motivated to spill words” or “the person asking for the explanation is not the kind of person the author is trying to bridge with or convince.”
(This is near the root of my issues with Said—I don’t find Said’s confusions at all predictive or diagnostic of confusion-among-my-actual-audience; Said seems to think that his own lack of understanding means something about the point being confused or insufficiently explained or justified, and I often simply disagree, and think that it’s a him-problem.)
I think that it’s good to usually leave space for people, as a matter of policy—to try not to leave the impression that you think (and thus others should think) that a failure to respond adequately is damning, except where you actually for-real think that a failure to respond adequately is genuinely damning.
Like, it’s important not to whip that social weapon out willy-nilly, because then it’s hard to tell when it’s a case of “no, we really mean it: if they can’t answer this question, we should actually throw out their claims.”
Currently, the environment of LW feels, to me, supersaturated with that sort of implication in a way that robs the genuine examples of their power, similar to how if people refer to every little microaggression as “racism” then calling out an actual racist for being actually macro-racist becomes much harder to do.
I don’t think it’s bad or off-limits to say that if someone can’t answer X, their point is invalid, but I think we should reserve that for when it’s both justified and actually seriously meant.
I think that’s a very interesting list of points. I didn’t like the essay at all, and the message didn’t feel right to me, but this post right here makes me a lot more sympathetic to it.
(Which is kind of ironic; you say this comment is dashed off, and you presumably spent a lot more time on the essay; but I’d argue the comment conveys a lot more useful information.)
Indeed: and many, many people take issue with what the-LW-users-euphemized-as-Socrates are doing, and they are utterly recalcitrant to change.
Hm...
I mean you’re not 100% wrong that the dissatisfaction indicates that there’s some sort of problem somewhere. I think my counterarguments are also not 100% wrong though.
Maybe the issue is that we are both jumping to conclusions without fully mapping out the underlying issues. Like, I mostly don’t know which people have gotten run off LessWrong, nor which interactions caused them to get run off. Maybe a list of examples could help clarify why their experience differs from mine, as well as clarify whether there are any alternatives to “less LessWrong critics” and “less LessWrong authors” for solving the problem (e.g. maybe standardizing some type of confidence indicators could help, hypothetically).
(Though that’s the issue, right? Your point is that mapping out the issue is too much work. So it’s not gonna get done. Maybe the Lightcone LW team could sponsor the mapping? Idk.)
I think there is a simple solution: the people who are currently getting quietly pissed at the Socrati, or who are sucking it up and tolerating them, stop doing so. They start criticizing the criticism, downvoting hard, upvoting the non-Socrati just to correct for the negativity drip, and banning the most prolific Socrati from commenting on their posts.
Instead of laboriously figuring out whether a problem exists, the people for whom Socrati are a problem can use the tools at their disposal to fight back/insulate themselves from worthless and degrading criticism.
Or the Socrati could, you know, listen to the complaints and adjust their behavior. But I don’t see that happening, or some sort of LW research into the true feelings of the frustrated expats, and I think the best alternative is just to have people like Duncan, who are frustrated with the situation, be a lot more vocal and active.
One reason I want more people to know about and use the ban-from-my-post feature is so the mod team* can notice patterns in who gets banned by individuals. Lots of people are disinclined to complain, especially if they believe Socrates is the site culture, and censuses of people who quietly left are effort-intensive and almost impossible to make representative; own-post banning, by contrast, is an honest signal that aligns with people’s natural incentives.
*Technically I’m on the mod team but in practice only participate in discussions, rather than take direct action myself.
I think my claim is more “there are genuine competing access needs between Socrati and Athenians, they fundamentally cannot both get all of what they want,” + a personal advocating-for-having-the-Athenians-win, à la “let’s have a well-kept garden.”
I mean that’s your conclusion. But your claim is that the underlying problem/constraint that generates this conclusion is that the Socrati raise a bunch of doubts about stuff and it’s too much work for the Athenians to address those doubts.
Like, you say the Athenians have the need for trust so they can build things bigger than themselves, and the Socrati have the need for being allowed to freely critique so they can siphon status by taking cheap shots at the Athenians. And these needs conflict because the cheap shots create a burden on the Athenians to address them.
Like, to me the obvious starting point would be to list as many instances of this happening as possible, to get a better picture of what’s going on. Which writers are we losing, what have they contributed, and what interactions made them leave? But this sort of rigorous investigation seems kind of counter to the spirit of your complaint.
Nnnnot quite; that’s not the analogy I intended to Athens.
Rather, what I’m saying is that the Socrati make the process of creating and sharing thoughts on LW much more costly than it otherwise would be, which drives authors away, which makes LW worse.
I don’t want to betray confidences or put words in other people’s mouths, but I can say that I’ve teetered on the verge of abandoning LW over the past couple of months, entirely due to large volumes of frustratingly useless and soul-sucking commentary in the vein of Socrates, one notable example being our interactions on the Lizardman post.
I’m not sure what essential difference you are highlighting between your description of the analogy and mine.
Maybe you can ping the people involved and encourage them to leave a description of their experiences here if they want?
I still think I had a reasonable point with my comments on the lizardman post, and Zack’s response to your post seems to be another example along the lines of my post.
It seems to me that the basic issue with your post is that you were calling for assumptions of good faith. But good faith is not always present, and so this leads to objections from the people who have faced severe bad faith.
If you don’t want objections to calls for good faith assumptions, you really gotta do something to make them not hit the people who face bad faith so hard. For instance, I would not be surprised if Zack would have appreciated your post if you had written “One exception is trans issues, where rationalist leaders typically act in bad faith. See Zack’s counters. Make sure to shame rationalist leaders for that until they start acting in good faith.”
Of course, then there’s the possibility that your call for good faith hits people other than Zack; e.g., AFAIK there’s been some bad faith towards Michael Vassar. But if you help propagate information about bad faith in general, then people who face bad faith won’t, in general, have reason to attack your writings.
In the long run, this would presumably incentivize good faith and so make calls for good faith assumptions more accurate.
That’s optimizing appearances in response to a bug report, instead of fixing the issue, making future bug detection harder. A subtly wrong claim that now harms people less is no less wrong for it.
What does “fixing the issue” mean in your model? Could you give an example of a change that would genuinely fix the issue?
I think of my proposal more as propagating the bug report to the places where it can get fixed than as optimizing for appearances.
Not making that claim as a claim of actuality. It could instead be pursued to the same effect as a hypothetical claim, held within a frame of good faith. Then the question of the frame being useful becomes separate from the question of the claim being true, and we can examine both without conflating them, on their own merits.
A frame in this context is a simulacrum/mask/attitude, epistemically suspect by its nature, but capable of useful activity as well as of inspiring/refining valid epistemic gears/features/ideas that are applicable outside of it. When you are training for an ITT, or practicing Scott Alexander’s flavor of charity, you are training a frame, learning awareness of the joints the target’s worldview is carved at. Being large and containing multitudes is about flitting between the frames instead of consistently being something in particular.
That’s an alternate approach one could take to handling the claim, though I don’t see how it’s less optimizing for appearances or more fixing the issue.
Saying “2+2=5 is a hypothetical claim” instead of “2+2=5 actually” is not a wrong claim optimizing for appearances; the appearances are now decisively stripped. It fixes the issue of making an unjustified claim, but doesn’t fix the issue of laboring under a possibly false assumption, of living in a counterfactual.
But what operates there is now a mask, lightly and cautiously held (like a venomous snake), not the whole of yourself, and not the core of epistemic lawfulness. A mask without the flaw might fail in maintaining the intended group dynamic. It’s unclear if the same effect can as feasibly occur without leaps of faith.
Saying “if xy=xz then you can also assume y=z. Unless x=0 for some reason, hey x pls fix yourself” also does not seem like a wrong claim optimizing for appearances.
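(To spell out the algebra behind that caveat, as a small aside using only the symbols already in the analogy: cancelling the common factor x is only valid when x is nonzero, since

$$xy = xz \;\Longrightarrow\; x(y - z) = 0 \;\Longrightarrow\; x = 0 \ \text{or}\ y = z.$$

So naming the “unless x=0” exception is exactly what keeps the cancellation claim true, rather than being a cosmetic patch on it.)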