I think my claim is more “there are genuine competing access needs between Socrati and Athenians, they fundamentally cannot both get all of what they want,” + a personal advocating-for-having-the-Athenians-win, à la “let’s have a well-kept garden.”
I mean that’s your conclusion. But your claim is that the underlying problem/constraint that generates this conclusion is that the Socrati raise a bunch of doubts about stuff and it’s too much work for the Athenians to address those doubts.
Like you say the Athenians have the need for trust so they can build things bigger than themselves, and the Socrati have the need for being allowed to freely critique so they can siphon status by taking cheap shots at the Athenians. And these needs conflict because the cheap shots create a burden on the Athenians to address them.
Like, to me the obvious starting point would be to list as many instances of this happening as possible, to get a better picture of what’s going on. Which writers are we losing, what writing have they contributed, and what interactions made them leave? But this sort of rigorous investigation seems kind of counter to the spirit of your complaint.
Nnnnot quite; that’s not the analogy to Athens that I intended.
Rather, what I’m saying is that the Socrati make the process of creating and sharing thoughts on LW much more costly than it otherwise would be, which drives authors away, which makes LW worse.
I don’t want to betray confidences or put words in other people’s mouths, but I can say that I’ve teetered on the verge of abandoning LW over the past couple of months, entirely due to large volumes of frustratingly useless and soul-sucking commentary in the vein of Socrates, one notable example being our interactions on the Lizardman post.
I’m not sure what essential difference you’re highlighting between your description of the analogy and mine.
Maybe you can ping the people involved and encourage them to leave a description of their experiences here if they want?
I still think I had a reasonable point with my comments on the Lizardman post, and Zack’s response to your post seems to be another example along the same lines.
It seems to me that the basic issue with your post is that you were calling for assumptions of good faith. But good faith is not always present, and so this leads to objections from the people who have faced severe bad faith.
If you don’t want objections to calls for good faith assumptions, you really gotta do something to make them not hit the people who face bad faith so hard. For instance, I would not be surprised if Zack had appreciated your post if you had written “One exception is trans issues, where rationalist leaders typically act in bad faith. See Zack’s counters. Make sure to shame rationalist leaders for that until they start acting in good faith.”
Of course, then there’s the possibility that your call for good faith hits people other than Zack; e.g., AFAIK there’s been some bad faith towards Michael Vassar. But if you help propagate information about bad faith in general, then people who face bad faith generally won’t have reasons to attack your writings.
In the long run, this would presumably incentivize good faith and so make calls for good faith assumptions more accurate.
That’s optimizing appearances in response to a bug report, instead of fixing the issue, making future bug detection harder. A subtly wrong claim that now harms people less is no less wrong for it.
What does “fixing the issue” mean in your model? Could you give an example of a change that would genuinely fix the issue?
I think of my proposal more as propagating the bug report to the places where it can get fixed than as optimizing for appearances.
Not making that claim as a claim of actuality. It could instead be pursued to the same effect as a hypothetical claim, held within a frame of good faith. Then the question of the frame being useful becomes separate from the question of the claim being true, and we can examine both without conflating them, on their own merits.
A frame in this context is a simulacrum/mask/attitude, epistemically suspect by its nature, but capable of useful activity as well as of inspiring/refining valid epistemic gears/features/ideas that are applicable outside of it. When you are training for an ITT (Ideological Turing Test), or practicing Scott Alexander’s flavor of charity, you are training a frame, learning awareness of the joints the target’s worldview is carved at. Being large and containing multitudes is about flitting between the frames instead of consistently being something in particular.
That’s an alternate approach one could take to handling the claim, though I don’t see how it’s any less about optimizing for appearances or any more about fixing the issue.
Saying “2+2=5 is a hypothetical claim” instead of “2+2=5 actually” is not a wrong claim optimizing for appearances; the appearances are now decisively stripped. It fixes the issue of making an unjustified claim, but doesn’t fix the issue of laboring under a possibly false assumption, living in a counterfactual.
But what operates there is now a mask, lightly and cautiously held (like a venomous snake), not the whole of yourself, and not the core of epistemic lawfulness. A mask without this flaw might fail to maintain the intended group dynamic. It’s unclear if the same effect can as feasibly occur without leaps of faith.
Saying “if xy=xz then you can also assume y=z. Unless x=0 for some reason, hey x pls fix yourself” also does not seem like a wrong claim optimizing for appearances.
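To spell out the algebra behind that analogy (a minimal sketch, using the same x, y, z over ordinary real numbers): the cancellation step is only valid when x is nonzero, which is exactly the exception being flagged.

\[ xy = xz \;\Longrightarrow\; x(y - z) = 0 \;\Longrightarrow\; y = z \quad \text{provided } x \neq 0 \]

When x = 0, the premise xy = xz holds for any y and z, so y = z does not follow; stating that exception up front is what keeps the claim from being wrong.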