Can you say more? What should the description say instead? (I’m guessing you’re referring to the fact that the post has some subtext that probably isn’t a good topic fit for Less Wrong? But I would argue that the text (using the blegg/rube parable setting to make another point about the cognitive function of categorization) totally is relevant and potentially interesting!)
“Fanfiction for the blegg/rube parable” and “to make another point about the cognitive function of categorization” are both completely ignoring the very large elephant in the rather small room.
The actual topic of the piece is clearly the currently hot topic of How To Think About Trans People. (Words like “trans” and “gender” are never mentioned, but it becomes obvious maybe four or five paragraphs in.) Which is a sufficiently mindkilling topic for sufficiently many people that maybe it’s worth mentioning.
(Or maybe not; you might argue that actually readers are more likely to be able to read the thing without getting mindkilled if their attention isn’t drawn to the mindkilling implications. But I don’t think many of those likely to be mindkilled will miss those implications; better to be up front about them.)
When I first read the post, I did not notice any reference to any mindkilling topics and was actually quite confused and surprised when I saw the comments about all of this being about something super political, and still found the post moderately useful. So I do think that I am a counterexample to your "I don't think many of those likely to be mindkilled will miss those implications" argument.
I’m not sure you are, since it seems you weren’t at all mindkilled by it. I could be wrong, though; if, once you saw the implications, it took nontrivial effort to see past them, then I agree you’re a counterexample.
… you’re right. (I like the aesthetics of the “deniable allegory” writing style, but delusionally expecting to get away with it is trying to have one’s cake and eat it, too.) I added a “Content notice” to the description here.
I know it’s rather a side issue, but personally I hate the “deniable allegory” style, though LW is probably a better fit for it than most places …
1. The temptation to say literally-X-but-implying-Y and then respond to someone arguing against Y with “oh, but I wasn’t saying that at all, I was only saying X; how very unreasonable of you to read all that stuff into what I wrote!” is too often too difficult to resist.
2. Even if the deniable-allegorist refrains from any such shenanigans, the fear of them (as a result of being hit by such things in the past by deniable allegorists with fewer scruples) makes it an unpleasant business for anyone who finds themselves disagreeing with any of the implications.
3. And of course the reason why that tactic works is that often one does misunderstand the import of the allegory; a mode of discussion that invites misunderstandings is (to me) disagreeable.
4. The allegorical style can say, or at least gesture towards, a lot of stuff in a small space. This means that anyone trying to respond to it in literal style is liable to look like an awful pedant. On the other hand, if you try to meet an allegory with another allegory, (a) that’s hard to do well and (b) after one or two rounds the chances are that everyone is talking past everyone else. Which might be fun but probably isn’t productive.
Thanks. In retrospect, possibly a better approach for this venue would have been to carefully rewrite the piece for Less Wrong in a way that strips more subtext/conceals more of the elephant (e.g., cut the “disrespecting that effort” paragraph).
I think, to make it work for my conception of LW, you’d also want to acknowledge other approaches (staying with 2 categories and weighting the attributes, staying with 2 categories and just acknowledging they’re imperfect, giving up on categories and specifying attributes individually, possibly with predictions of hidden attributes, adding more categories and choosing based on the dimension with biggest deviation from average, etc.), and identify when they’re more appropriate than your preferred approach.
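A minimal, hypothetical sketch of what a few of those alternatives might look like in code, using a toy blegg/rube-style object. All attribute names, weights, and prototypes below are invented for illustration and are not from the original post:

```python
# Illustrative only: three ways to handle an object that doesn't fit cleanly
# into two categories. Attributes are encoded as numbers in [0, 1].

PROTOTYPES = {
    "blegg": {"blue": 1.0, "egg_shaped": 1.0, "furred": 1.0, "contains_vanadium": 1.0},
    "rube":  {"blue": 0.0, "egg_shaped": 0.0, "furred": 0.0, "contains_vanadium": 0.0},
}

# Hypothetical per-attribute weights (how much each dimension matters).
WEIGHTS = {"blue": 0.2, "egg_shaped": 0.2, "furred": 0.1, "contains_vanadium": 0.5}

def weighted_two_category(obj):
    """Approach 1: keep two categories, but score membership by weighted attribute match."""
    scores = {
        name: sum(WEIGHTS[a] * (1.0 - abs(obj[a] - proto[a])) for a in WEIGHTS)
        for name, proto in PROTOTYPES.items()
    }
    return max(scores, key=scores.get), scores

def predict_hidden_attribute(obj, observed=("blue", "egg_shaped", "furred")):
    """Approach 2: skip the category; predict a hidden attribute from the observed
    ones directly (a crude average stands in for a real predictive model)."""
    return sum(obj[a] for a in observed) / len(observed)

def nearest_of_many(obj, extra_prototypes):
    """Approach 3: add more categories and assign the object to the nearest prototype."""
    all_protos = {**PROTOTYPES, **extra_prototypes}
    def distance(proto):
        return sum(abs(obj[a] - proto[a]) for a in WEIGHTS)
    return min(all_protos, key=lambda name: distance(all_protos[name]))

if __name__ == "__main__":
    oddball = {"blue": 1.0, "egg_shaped": 0.0, "furred": 1.0, "contains_vanadium": 0.0}
    print(weighted_two_category(oddball))
    print(predict_hidden_attribute(oddball))
    print(nearest_of_many(oddball, {"blurple": oddball}))
```

Each function illustrates when its approach is the natural one: weighted scores when the two categories are genuinely useful but imperfect, direct attribute prediction when the category adds nothing over the attributes themselves, and extra prototypes when the cluster structure really does have more than two modes.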
WTF. I didn’t downvote (until now), but didn’t see any point to so many words basically saying “labels are lossy compression, get over it”.
Now that I actually notice the website name and understand that it’s an allegory for a debate that doesn’t belong here (unless gender categorization somehow is important to LW posts), I believe it also doesn’t belong here. I believe that it doesn’t belong here regardless of which side I support (and I don’t have any clue what the debate is, so I don’t know what the lines are or which side, if any, I support).
Quick note that the mod team has been observing this post and the surrounding discussion and isn't 100% sure how to think about it. The post itself is sufficiently abstracted that, unless you're already aware of the political discussion, it seems fairly innocuous. Once you're aware of the political discussion, it's fairly blatant. It's unclear to me how bad this is.
I do not have much confidence in any of the policies we could pick and stick to here. I've been mostly satisfied with the resulting conversation on LW staying at a pretty abstract, meta level.
Perhaps also worth noting: I was looking at two other recent posts, Tale of Alice Almost and In My Culture, through a similar lens. They each give me the impression that they relate in some way to a political dispute which has been abstracted away, along with a vague feeling that the resulting post may somehow still be part of the political struggle.
I'd like to have a moderation policy (primarily about whether such posts get frontpaged) that works regardless of whether I actually know anything about any behind-the-scenes drama. I've mulled over a few different such policies, each of which would result in different outcomes as to which of the three posts would get frontpaged. But in each case the three posts are hovering near the edge of however I'd classify them.
(The mod team was fairly divided on how important a lens this was and/or exactly how to think about it, so just take this as my own personal thoughts for now.)
My current model is that I'm in favor of people trying to come up with general analogies, even if they are in the middle of thinking about mindkilling topics. People have all kinds of weird motivations for writing posts, and trying to judge and classify based on those motivations is going to be hard and will set up weird metacognitive incentives; just deciding whether something is useful for solving problems in general has pretty decent incentives, and allows us to channel a lot of people's motivations about political topics into stuff that is useful in a broader context. (And I think some of Sarah Constantin's posts are a good example of ideas I found useful completely separate from the political context, where I'm quite glad she abstracted them away from the local political context that probably motivated her to think about those things in the first place.)
unless [...] categorization somehow is important to LW posts
Categorization is hugely relevant to Less Wrong! We had a whole Sequence about this!
Of course, it would be preferable to talk about the epistemology of categories with non-distracting examples if at all possible. One traditional strategy for avoiding such distractions is to abstract the meta-level point one is trying to make into a fictional parable about non-distracting things. See, for example, Scott Alexander’s “A Parable on Obsolete Ideologies”, which isn’t actually about Nazism—or rather, I would say, is about something more general than Nazism.
Unfortunately, this is extremely challenging to do well—most writers who attempt this strategy fail to be subtle enough, and the parable falls flat. For this they deserve to be downvoted.
So I think my filter for “appropriate to LessWrong” is that it should be an abstraction and generalization, NOT a parable about or an obfuscation of a specific topic. If there is a clean mapping to a current hot-button issue, the author should do additional diligence to find counterexamples (cases where more categories are costly, or where some dimensions are important for some uses and not for others, so you should use tagging rather than categorization) in order to actually define a concept rather than just restating a preference.
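For the tagging-versus-categorization point, here is a minimal, hypothetical sketch (all names are invented for illustration):

```python
# "Tagging rather than categorization": record the dimensions separately
# instead of collapsing them into one category label, so each use case can
# query only the dimensions it cares about.
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    tags: set = field(default_factory=set)

inventory = [
    Item("obj1", {"blue", "egg_shaped", "furred"}),
    Item("obj2", {"red", "cube_shaped", "contains_vanadium"}),
]

# One downstream use cares only about shape, another only about contents.
egg_shaped = [i.name for i in inventory if "egg_shaped" in i.tags]
vanadium = [i.name for i in inventory if "contains_vanadium" in i.tags]
print(egg_shaped)  # ['obj1']
print(vanadium)    # ['obj2']
```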
I think it is worth pointing out explicitly (though I expect most readers noticed) that Dagon wrote “unless gender categorization is important” and Zack turned it into “unless … categorization is important” and then said “Categorization is hugely relevant”. And that it’s perfectly possible that (1) a general topic can be highly relevant in a particular venue without it being true that (2) a specific case of that general topic is relevant there. And that most likely Dagon was not at all claiming that categorization is not an LW-relevant topic, but that gender categorization in particular is a too-distracting topic.
(I am not sure I agree with what I take Dagon’s position to be. Gender is a very interesting topic, and would be even if it weren’t one that many people feel very strongly about, and it relates to many very LW-ish topics—including, as Zack says, that of categorization more generally. Still, it might be that it’s just too distracting.)
The right word to elide from my objection would be “categorization”—I should have said “unless gender is important”, as that’s the political topic I don’t think we can/should discuss here. Categorization in mathematical abstraction is on-topic, as would be a formal definition/mapping of a relevant category to mathematically-expressible notation.
Loose, informal mappings of non-relevant topics are not useful here.
And honestly, I'm not sure how bright my line is—I can imagine gender or other human-relationship topics that tend to bypass rationality being meta-discussed here, especially if the point is raising the sanity waterline on such topics and understanding what goes wrong when they're discussed at the object level. I doubt we'd get good results if we had direct object-level debates or points made here on those topics.
I think I roughly agree with this, though the LW team definitely hasn’t discussed this at length yet, and so this is just my personal opinion until I’ve properly checked in with the rest of the team.