I’m going to take a stab at cruxing here.

Whether it’s better for the LW community when comments explicitly state a reasonable amount of the epistemic hedging that they’re doing.
Out of all the things you would have added to orthonormal’s comment, the only one that I didn’t read at the time as explicit or implicit in zir comment was, “Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1⁄5”. I agree it would be nice if people gave more information about their own calibration where available. I don’t know whether it was available to orthonormal.
As for the rest, I’m sticking that at the end of this comment as a sort of appendix.
If I’m right about the crux, that is totally not in the set of Things That I Thought You Might Have Been Saying after reading the original post. Re-reading the original post now, I don’t see how I could have figured out that this is what our actual disagreement was.
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post. Honestly, the intensity of the post seems disproportionate to the size of the disagreement, and also to the likelihood that people are going to disagree with you to the point that they want to not be in a community with you anymore. I don’t feel like we need to fork anything based on the distance between our positions.
Why do you think the intensity scalars are so different between us?
***
All right, here comes some subjective experience. I’m offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.
The comment makes it clear that it is subjective experience. I wouldn’t expect ortho to add it if ze didn’t think it was relevant. People sharing their impressions of a situation to get at the truth, which seemed to be the point of the post and comments, just is wide-net data gathering mode.
I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was “this person wants to be a cult leader.” This was based on [specific number of minutes] of conversation.
I don’t expect ortho to remember the number of minutes from nine years ago.
The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations.
I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010’s.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
I feel like this is important and relevant because it seems like yet again we’re in a situation where a bunch of people are going “gosh, such shock, how could we have known?”
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
The delta between my wannabe-cult-leader-detectors and everyone else’s is large, and I don’t know its source, but the same thing happened with [don’t name him, don’t summon him], who was booted from the Berkeley community for good reason.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
I don’t think opaque intuition should be blindly followed, but as everyone is reeling from Zoe’s account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience. Crux?
Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1⁄5, and that means there’s probably something to be learned from me and people like me.
See above.
If you’re actually looking for ways to make this better in the future, anyway.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me. Crux?
***

I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post.
Hmmm, something has gone wrong. This is not the case, and I’m not sure what caused you to think it was the case.
“How explicit comments need to be regarding their own epistemic status” is a single star in the constellation of considerations that caused me to write the post. It’s one of the many ways in which I see people doing things that slightly decrease our collective ability to see what’s true, in a way that compounds negatively, where people might instead do things that slightly increase our collective ability, in a way that compounds positively.
But it’s in no way the central casus belli of the OP. The constellation is. So my answer to “Why do you think the intensity scalars are so different between us?” is “maybe they aren’t? I didn’t mean the thing you were surprised by.”
I don’t expect ortho to remember the number of minutes from nine years ago … I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
Here, I was pulling for the virtue of numeric specificity, which I think is generally understood on LW. I’m reminded of the time that some researchers investigated what various people meant by the phrase “a very real chance,” and found that at least one of them meant 20% and at least one of them meant 80% (which are opposites).
It’s true that numbers aren’t super reliable, but even estimated/ballpark numbers (you’ll note I wrote the phrase “as many as” and imagined ortho stating a ceiling) are much better for collective truth-tracking than wide-open vague phrases that allow people with very different interpretations to be equally confident in those interpretations. The goal, after all, at least in my view, is to help us narrow down the set of possible worlds consistent with observation. To provide data that distinguishes between possibilities.
The comment makes it clear that it is subjective experience.
True. (I reiterate, feeling a smidge defensive, that I’ve said more than once that the comment was net-positive as written, and so don’t wish to have to defend a claim like “it absolutely should have been different in this way!” That’s not a claim I’m making. I’m making the much weaker claim that my rewrite was better. Not that the original was insufficient.)
The thing that I’m pulling for, with the greater explicitness about its subjectivity …
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to do this. But I think it’s clearly stronger if it does.
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
I validate that. But I suspect you would not claim that their reason doesn’t matter at all, to anyone. And I suspect you would not claim that a substantial chunk of LWers aren’t guessing or intuiting or modeling or projecting reasons, and then responding based on the cardboard cutouts in their minds. The rewrite included more attempts to rule out everything else than the original comment did, because I think ruling out everything else is virtuous, and one of those moves that helps us track what’s going on, and reduces the fog and confusion and rate of misunderstandings.
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience.
I don’t think that’s true at all. I think that there are several different implications compatible with the act of posting ortho’s comment, and that “I’m suggesting that you weight my opinion more heavily based on me being right in this case” is only one such implication, and that it’s valuable to be specific about what you’re doing and why because other people don’t actually just “get” it. The illusion of transparency is a hell of a drug, and so is the typical mind fallacy. Both when you’re writing, and assume that people will just magically know what you’re trying to accomplish, and when you’re reading, and assume that everyone else’s interpretation will be pretty close to your own.
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to head off that sort of misunderstanding at the pass. But I think it’s clearly better if it does so.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me.
I actually included that sentence because I felt like ortho’s original comment was intentionally combative (and a little bizarrely so), and that my rewrite had removed too much of its intentional heat to be a sufficiently accurate restatement. So I think we’re not in disagreement on that.
***

Understood: the comment-karma-disparity issue is, for you, a glaring example of a larger constellation.
Also understood: you and I have different preferences for explicitly stating underlying claims. I don’t think your position is unreasonable, just that it will lead to much longer comments, possibly at the cost of clarity and engagement. Striking that balance is Hard.
I think we’ve drilled as far down as is productive on my concerns with the text of your post. I would like to see your follow-up post on the entire constellation, with the rigor customary here. You could definitely persuade me. I maybe was just not part of the target audience for your post.