This all sounds approximately right (point #6 is where I touch upon this aspect of the model. I didn’t dwell on it since there were a lot of other things to dwell on)
My claim is something like “yes, performative conversation is often the default, but performative conversation makes it harder to find and agree on truth. So if that is your goal, taking it private will help. If that’s not your goal, taking it private may not help.”
(Meta aside: I notice you splitting comments due to length, and I’m not sure if that’s an aesthetic choice or because of something about the site preventing long comments. I’ve been able to type long comments without issue, so I wasn’t sure.)
It’s the latter (and I am told it’s being worked on, which is why I haven’t posted to complain about it).
I think it’s been mentioned a couple of times that the site does not have any limits on comment length. If you’re having trouble posting long comments, can you elaborate on what happens when you try?
From what I understand, Said is currently posting comments through greaterwrong.com, which is a site that uses our API to provide their own view on the LesserWrong.com content. Our API currently has a character limit for posting markdown directly into comments (an accidental result of some of the frameworks we are using), and I think greaterwrong.com is running into that problem.
Hmm. I feel like I might not have quite gotten my point across (which is possibly because it was nearly 5 AM when I posted that comment). I can’t yet tell if we disagree or if I simply haven’t made clear what I’m saying, so let me try to expand a bit on this.
You say:
yes, performative conversation is often the default, but performative conversation makes it harder to find and agree on truth.
This seems to suggest a model where two people are engaging in collaborative truth-seeking, but—because they’re doing this in public—performativeness is a quality that their conversation ends up having, which interferes with their goal.
I, on the other hand, am suggesting a model where the performative aspect is inseparable from the goal, where it, in a serious sense, is [a large part of] the goal.
Now, maybe it’s just that we differ in our estimation of how prevalent this is (or how prevalent it is here). But… it seems to me to be a fairly safe supposition that even if conversation-as-performance[1] is less common on Less Wrong than elsewhere (relative to “conversation as collaborative truth-seeking”), it is probably almost always what’s going on in what you call a “demon thread”.
But this means that taking the conversation private will basically never help.
I’m fairly confident that we’re (roughly) understanding each other, but have some underlying differences on a combination of a) how the world currently is, b) how the LW world is right now, and c) what’s desirable and achievable for LW culture.
(Actually I think we probably agree on how the world in general is).
I think that’s beyond the scope of the conversation I want to have on this post though.
Fair enough, and I tentatively agree with your evaluation.
I do think that this broader conversation is important to have at some point (though, indeed, this post is not the place for it)—because whether this (or, indeed, any other) scheme succeeds, depends on its outcome.
Agreed. For now, I just want to be clear that I think the tactic outlined in this post only makes sense if you’re using the overall strategy listed in this parent comment, and I think whether that strategy makes sense depends on what your current situation is.