Flipping this around: this seems like yet another data point in favor of investing at least moderately in signalling. Heuristically, people won’t distinguish your lack of caring-about-signalling from lack of ability-to-signal.
Kisil
Sure. The biggest one is that when someone has poor social skills, we treat that as a thing to tolerate rather than as a thing to fix. E.g. someone shows up to a meetup and doesn’t really get how conversation flow works: when it’s time to talk and when it’s time to listen, how to tell the difference between someone being interested in what ze has to say and someone just being polite. We’re welcoming, at least outwardly, and encourage that person to keep showing up, so ze does. And the people who are both disinclined to be ranted at and socially skilled enough to avoid the person learn to do so, but we don’t seem to make any effort to help the person become less annoying. So ze continues to inflict zirself on newcomers who haven’t learned better, and they walk away with the impression that that’s what our community is.
Which is sad, because we spend plenty of time encouraging self-improvement in thinking skills. If we siphoned some effort from “notice you’re confused” to “notice your audience”, we should be able to encourage self-improvement in social skills as well. But since we don’t treat it like something fixable, it doesn’t get fixed.
2a here seems like a major issue to me. I’ve had an essay brewing for a couple of months, about how the range of behaviors we tolerate affects who is willing to join the community. It’s much easier to see the people who join than the people who are pushed away.
I argue that the way we are currently inclusive goes beyond being a safe space for weirdness, and extends into being anti-normal in a way that frightens off anyone who already has strong mainstream social skills. And that we can and should encourage social skill development while remaining a safe space.
If there’s interest, I’ll finish writing the longer-form argument.
This crystallization really resonated with me. I’ve recently noticed a social norms divide, where some people seem to perceive requests for more information as hostile (attacking their status), rather than as a sign of interest. “I do not understand your world view, tell me more” can translate as “I like you and am interested in understanding you better”, or as “you are obviously wrong, please show me some weakness so that I can show how much smarter I am.” Or related, consider:
A: I’m working on X.
B: I’ve heard Y about X, what do you think?
Is B mentioning Y a sign of belonging to A’s in[terest]-group, and a bid for closeness? Or is B bidding for status, trying to show how much better informed B is?
Obviously I’ve removed all the interesting subtlety from my examples here, and it’s easy to imagine a conversation such that the hypothetical questions have obvious answers. It’s also possible for B to be unambiguous in one direction or the other—this is a useful social skill. My point is that there’s also overlap, where B intends to bid for closeness, but is interpreted as bidding for status. And that’s a function of A’s assumptions, not just about B but about how interactions in general are supposed to be structured.
“Comment epistemic status” would work.
I think I can make this! Any tips for identifying the group?
Data point: I would love to come to something like this, but I’m out of town.
Stop reading this.
Did you stop? So I don’t think the difficulty is avoiding compliance with commands in general. Rather, it’s switching between the mental modes of “complying” and “not complying” under time pressure.
I’m also going, and would also like to meet other LW-ers. Let’s wander towards Grendel’s Den around 6.
If a couple people reply to this, I’ll come up with more explicit logistics, but I can’t plan at 1am.
Hi.
I’ve posted comments twice, I think, but my read/write ratio is high enough that I think I still count here.
Thanks for writing this article. If the feedback helps, I found your self-disclosure much more illustrative than “gooey.”
I made a point of noting non-sadness deficiencies in my status
Did you formally track your mental state at any point? The luminosity series, among other things, has gotten me thinking about the fact that my overall historical impression of my mood has been a pretty poor indicator of my moment-to-moment mood. I can get fuzzy snapshots by reading through my scattered past writing, but am missing a lot of data. So I’ve been working on a system to do some more regular, granular tracking, and I wondered if others here have a) found tracking effective at all, b) found particular levels of granularity that met your needs, c) found any particular system or tool helpful. (I’m considering building a tool for this process if I can’t find one, and would happily share here if there’s interest.)
Edit: This seems unclear on a re-read. I meant to say my general impression of my past moods approximates the sum of my moment-to-moment moods poorly; I’m planning to take data to get a more accurate estimate.
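For the curious: the tracking system I have in mind could be as simple as the sketch below. This is a minimal illustration, not the actual tool; the file name, the -5..+5 rating scale, and the function names are all hypothetical choices of mine.

```python
# Minimal sketch of granular mood tracking (hypothetical design):
# append timestamped ratings to a CSV, then compare a retrospective
# impression against the mean of the logged moment-to-moment ratings.
import csv
import statistics
from datetime import datetime

LOG_PATH = "mood_log.csv"  # assumed filename

def record(rating, path=LOG_PATH):
    """Append one timestamped mood rating (e.g. on a -5..+5 scale)."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), rating])

def compare(retrospective, path=LOG_PATH):
    """Compare a retrospective impression against the logged average."""
    with open(path, newline="") as f:
        ratings = [float(row[1]) for row in csv.reader(f)]
    logged_mean = statistics.mean(ratings)
    return {
        "logged_mean": logged_mean,
        "retrospective": retrospective,
        "gap": retrospective - logged_mean,  # positive = rosier memory
    }
```

A large, consistent “gap” would be exactly the kind of evidence I’m hoping to gather: that my general impression approximates the sum of my moment-to-moment moods poorly.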
An excellent parable! The argument against the Nazism meme is quite well laid out, though Colonel Frank comes off a bit like a straw man.
Unfortunately, I think this argument misses the main difficulty the General faces. It’s easy to see that it would be better to replace the movement with something more sound. But policy makers cannot simply decide whether to preserve the Nazism meme. Actually changing dominant cultural ideas is a tremendously difficult problem, especially when the belief framework includes a sense of persecution. You cannot simply make it disappear by banning swastikas and slapping pro-democracy slogans on public transit vehicles.
The demise of the Nazi ideology may hold important lessons about effecting cultural change. There is still tremendous guilt about the Nazi movement in the German psyche—they even avoid the word, preferring the abbreviation NS. What happened to cause such a rapid and thorough reversal? Was it simply the revelation of the massive atrocities? I don’t know, but I don’t think this cause is by itself sufficient. Clearly many factors influenced the fall of Fascism; I would be curious to see this reversal studied in depth.
On a related note, all discussions of religion are now over, and we have lost. Does it make it better that Yvain knew it going in?
I may be late to the game here, but I found this chapter much less effective than the previous four, and I updated hard from “This book might resonate outside the LW community” towards “This will definitely not resonate outside the LW community.” Maybe the community is the target, full stop, but that seems unnecessarily modest. What bothered me most was that the conversations are full of moments where Eliezer seems to be unnecessarily personalizing, which reads like bragging, e.g.:
I’m having a hard time describing exactly why I found these so off-putting, but I think it has to do with the ways LW gets described as a cult. The more I think about it, the more I think that this is a problem with the framing of conversations in the first place: it’s hard to avoid looking like a git when you pick three examples of being smarter than other people, even with the caveat up front that you know you’re being unfair about it.
This chapter also felt much more densely packed with Rationalist vernacular, e.g. “epistemic harm”, “execute the ‘[...]’ technique”, “obvious failure mode”, “doing the hard cognitive labor”, and to a lesser extent, Silicon Valley vernacular (most of part iii). Sometimes you have to introduce new terms, but each time burns inferential distance points, and sometimes even weirdness points.