Indeed: and many, many people take issue with what the-LW-users-euphemized-as-Socrates are doing, and they are utterly recalcitrant to change.
Hm...
I mean, you’re not 100% wrong that the dissatisfaction indicates there’s some sort of problem somewhere. I think my counterarguments are also not 100% wrong, though.
Maybe the issue is that we are both jumping to conclusions without fully mapping out the underlying issues. Like, I mostly don’t know which people have gotten run off LessWrong, nor which interactions caused them to get run off. Maybe a list of examples could help clarify why their experience differs from mine, as well as whether there are any alternatives to “fewer LessWrong critics” and “fewer LessWrong authors” for solving the problem (e.g. maybe standardizing some type of confidence indicator could help, hypothetically).
(Though that’s the issue, right? Your point is that mapping out the issue is too much work. So it’s not gonna get done. Maybe the Lightcone LW team could sponsor the mapping? Idk.)
I think there is a simple solution: the people who are currently getting quietly pissed at the Socrati, or who are sucking it up and tolerating them, stop doing so. They start criticizing the criticism, downvoting hard, upvoting the non-Socrati just to correct for the negativity drip, and banning the most prolific Socrati from commenting on their posts.
Instead of laboriously figuring out whether a problem exists, the people for whom Socrati are a problem can use the tools at their disposal to fight back/insulate themselves from worthless and degrading criticism.
Or the Socrati could, you know, listen to the complaints and adjust their behavior. But I don’t see that happening, nor some sort of LW research into the true feelings of the frustrated expats, so I think the best alternative is just to have people like Duncan who are frustrated with the situation be a lot more vocal and active.
One reason I want more people to know about and use the ban-from-my-post feature is so the mod team* can notice patterns in who gets banned by individuals. Lots of people are disinclined to complain, especially if they believe Socrates is the site culture, and censuses of people who quietly left are effort-intensive and almost impossible to make representative; own-post banning, by contrast, is an honest signal that aligns with people’s natural incentives.
*Technically I’m on the mod team, but in practice I only participate in discussions rather than taking direct action myself.
I think my claim is more “there are genuine competing access needs between Socrati and Athenians; they fundamentally cannot both get all of what they want,” + a personal advocating-for-having-the-Athenians-win, à la “let’s have a well-kept garden.”
I mean, that’s your conclusion. But your claim is that the underlying problem/constraint that generates this conclusion is that the Socrati raise a bunch of doubts about stuff, and it’s too much work for the Athenians to address those doubts.
Like, you say the Athenians need trust so they can build things bigger than themselves, and the Socrati need to be allowed to freely critique so they can siphon status by taking cheap shots at the Athenians. And these needs conflict because the cheap shots create a burden on the Athenians to address them.
Like, to me the obvious starting point would be to list as many instances of this happening as possible, to get a better picture of what’s going on. Which writers are we losing, what writings have they contributed, and what interactions made them leave? But this sort of rigorous investigation seems kind of counter to the spirit of your complaint.
Nnnnot quite; that’s not the analogy to Athens I intended.
Rather, what I’m saying is that the Socrati make the process of creating and sharing thoughts on LW much more costly than it otherwise would be, which drives authors away, which makes LW worse.
I don’t want to betray confidences or put words in other people’s mouths, but I can say that I’ve teetered on the verge of abandoning LW over the past couple of months, entirely due to large volumes of frustratingly useless and soul-sucking commentary in the vein of Socrates, one notable example being our interactions on the Lizardman post.
I’m not sure what essential difference you are highlighting between your description of the analogy and my description.
Regarding the confidences: maybe you can ping the people involved and encourage them to leave a description of their experiences here if they want?
I still think I had a reasonable point with my comments on the Lizardman post, and Zack’s response to your post seems to be another example along the same lines.
It seems to me that the basic issue with your post is that you were calling for assumptions of good faith. But good faith is not always present, and so this leads to objections from the people who have faced severe bad faith.
If you don’t want objections to calls for good faith assumptions, you really gotta do something to make them not hit the people who face bad faith so hard. For instance, I would not be surprised if Zack would have appreciated your post if you had written “One exception is trans issues, where rationalist leaders typically act in bad faith. See Zack’s counters. Make sure to shame rationalist leaders for that until they start acting in good faith.”
Of course, then there’s the possibility that your call for good faith hits people other than Zack; e.g., AFAIK there’s been some bad faith towards Michael Vassar. But if you help propagate information about bad faith in general, then people who face bad faith won’t have reason to attack your writings.
In the long run, this would presumably incentivize good faith and so make calls for good faith assumptions more accurate.
That’s optimizing appearances in response to a bug report instead of fixing the issue, which makes future bug detection harder. A subtly wrong claim that now harms people less is no less wrong for it.
What does “fixing the issue” mean in your model? Could you give an example of a change that would genuinely fix the issue?
I think of my proposal more as propagating the bug report to the places where it can get fixed than as optimizing for appearances.
Not making that claim as a claim of actuality. It could instead be pursued to the same effect as a hypothetical claim, held within a frame of good faith. Then the question of the frame being useful becomes separate from the question of the claim being true, and we can examine both without conflating them, on their own merits.
A frame in this context is a simulacrum/mask/attitude, epistemically suspect by its nature, but capable of useful activity as well as of inspiring/refining valid epistemic gears/features/ideas that are applicable outside of it. When you are training for an ITT (Ideological Turing Test), or practicing Scott Alexander’s flavor of charity, you are training a frame, learning awareness of the joints the target’s worldview is carved at. Being large and containing multitudes is about flitting between the frames instead of consistently being something in particular.
That’s an alternate approach one could take to handling the claim, though I don’t see how it’s less optimizing for appearances or more fixing the issue.
Saying “2+2=5 is a hypothetical claim” instead of “2+2=5 actually” is not a wrong claim optimizing for appearances; the appearances are now decisively stripped. It fixes the issue of making an unjustified claim, but doesn’t fix the issue of laboring under a possibly false assumption, living in a counterfactual.
But what operates there is now a mask, lightly and cautiously held (like a venomous snake), not the whole of yourself, and not the core of epistemic lawfulness. A mask without the flaw might fail in maintaining the intended group dynamic. It’s unclear if the same effect can as feasibly occur without leaps of faith.
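A minimal sketch of the hypothesis/assertion distinction in Lean (assuming a recent Lean 4 toolchain, where the omega linear-arithmetic tactic is built in):

```lean
-- Reasoning *under* the hypothesis 2 + 2 = 5 without asserting it:
-- the frame (hypothesis h) is held separately from any claim of actuality.
example (h : 2 + 2 = 5) : 1 = 2 := by
  -- From a contradictory hypothesis, linear arithmetic derives anything.
  omega

-- The corresponding claim of actuality has no proof:
-- example : 2 + 2 = 5 := by omega   -- fails
```

The hypothetical version is a perfectly lawful move that still yields consequences; only discharging the hypothesis would require it to actually be true.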
Saying “if xy=xz then you can also assume y=z. Unless x=0 for some reason, hey x, pls fix yourself” also does not seem like a wrong claim optimizing for appearances.
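Likewise spelled out, here is the cancellation claim with its side condition carried explicitly; a sketch assuming Lean 4 with Mathlib, whose mul_left_cancel₀ lemma states exactly this form:

```lean
import Mathlib

-- Cancellation stated honestly: y = z follows from x * y = x * z
-- only under the explicit side condition x ≠ 0.
example (x y z : ℚ) (hx : x ≠ 0) (h : x * y = x * z) : y = z :=
  mul_left_cancel₀ hx h
```

The unconditional version is false exactly at x = 0, which is what the “hey x, pls fix yourself” clause flags.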