Yes, also see my 2017 post Guided Mental Change Requires High Trust.
I think it’s a sort of double entendre? It’s also possible the author didn’t actually read Zvi’s post in the first place. This is implied by the following:
Slack is a nerd culture concept for people who subscribe to a particular attitude about things; it prioritizes clever laziness over straightforward exertion and optionality over firm commitment.
In the broader nerd culture, slack is a thing from the Church of the SubGenius, where it means something more like a kind of adversarial zero-sum fight over who has to do all the work. In that context, the post title makes total sense.
For an example of this, see: https://en.wikipedia.org/wiki/Chez_Geek
I was about to write up some insight porn about it, and then was like “you know, Raemon, you should probably actually think about this for real, since it seems like Pet Psychology Theories are one of the easier ways to get stuck in dumb cognitive traps.”
Thank you. I’m really, really sick of seeing this kind of content on LW, and this moment of self-reflection on your part is admirable. Have a strong upvote.
Thanks for inspiring GreaterWrong’s new ignore feature.
For what it’s worth, I don’t feel like ‘escalation spiral’ is particularly optimal. The concept you’re going for is hard to compress into a few words because there are so many similar things. It was just the best I could come up with without spending a few hours thinking about it.
“Uphill battle” is a standard English idiom; such idioms are often fairly nonsensical if you think about them hard enough (e.g., “have your cake and eat it too”), but they get a free pass because everyone knows what they mean.
and one feature of the demon thread is ‘everyone is being subtly warped into more aggressive, hostile versions of themselves’
See, that’s obvious in your mind, but I don’t think it’s obvious to others from the phrase ‘demon thread’. In fact, hearing it put like that, the name suddenly makes much more sense! However, it would never be apparent to me from hearing the phrase alone. I would go for something like “Escalation Spiral” or “Reciprocal Misperception” or perhaps “Retaliation Bias”.
One thing I like to do before I pick a phrase in this vein, is take the most likely candidates and do a survey with people I know where I ask them, before they know anything else, what they think when they hear the phrase. That’s often steered me away from things I thought conveyed the concept well but actually didn’t.
That post is a fairly interesting counterargument, thanks for linking it. This passage would be fun to try out:
This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you’d grab an unused toy that seemed to suit it decently well, and then you’d gesture with it while explaining the concept. Then later you could refer to “the sparkly pink ball thing” or simply “this thing” while gesturing at the ball. Possibly, the other person wouldn’t remember, or not immediately. But if they did, you could be much more confident that you were on the same page. It’s a kind of shared mnemonic handle.
My problem with s1 and s2 is that it’s very difficult to remember which is which unless you’ve had it reinforced a bunch of times. I tend to prefer good descriptive names to nondescriptive ones, but certainly nondescriptive names are better than bad names which cause people to infer meaning that isn’t there.
Most people don’t learn jargon by reading the original source for a term or phrase, they learn it from other people. Therefore one of the best ways to stop your jargon from being misused is to coin it in such a way that the jargon is a compressed representation of the concept it refers to. Authors in this milieu tend to be really bad at this. You yourself wrote about the concept of a ‘demon thread’, which I would like to (playfully) nominate for worst jargon ever coined on LessWrong. Its communicated meaning without the original thread boils down to ‘bad thread’ or ‘unholy thread’, which means that preserving the meaning you wanted it to have is a multi-front uphill battle in snow.
Another awful example from the CFAR handbook is the concept of ‘turbocharging’, which is a very specific thing but the concept handle just means ‘fast electricity’ or ‘speedy movement’. Were it not for the context, I wouldn’t know it was about learning at all. Even when I do have that context, it isn’t clear what makes it ‘turbo’. If it were more commonly used it would be almost instantly diluted without constant reference back to the original source.
For a non-LessWrong example, consider the academic social justice concept of ‘privilege’, which has (or had) a particular meaning that was useful to have a word for. However mainstream political commentary has diluted this phrase almost to the point of uselessness, making it a synonym for ‘inequality’.
It’d be interesting to do a study of, say, 20-50 jargon terms and see how closely degree of dilution tracks degree of self-containment. In any case, I suspect that trying to make jargon more self-contained in its meaning would reduce misuse. “Costly Signaling” is harder to misinterpret than “Signaling”, for example.
I like the spirit of this post, but think I object to considering this ‘too smart for your own good’. That framing feels more like an identity-protecting maneuver than trying to get at reality. The reality is that you think you’re smarter than you are, and it causes you to trip over your untied shoelaces. You acknowledge this of course, but describing it accurately seems beyond your comfort zone. The closest you get is when you put ‘smart’ in scare quotes near the end of the essay.
Just be honest with yourself, it hurts at first but the improvement in perspective is massive.
You have the year wrong in the title.
It’s been a classic guideline of the site for a long time that you should avoid using the word ‘rational’ or ‘rationalist’ in titles as an adjective to describe stuff. In the interest of avoiding a repeat of the LW 1 apocalypse, I (and probably others) would really appreciate it if you changed it.
Suggested feature: adding a “link option” to answers. I’m not sure what this is actually called, but it’s a feature that comments have. For example, here is a link to this comment.
This is generally called a permalink.
I think my broader response to that is “Well, if I could change one thing about LW 2 it would be the moderation policy.”
That seems strictly off topic though, so I’ll let it be what it is.
My Complaint: High Variance
Well, to put it delicately, the questions have seemed high variance when it comes to quality.
That is, the questions posed have been either quite good or stunningly mediocre, with little in between.
3 examples of good questions
https://www.greaterwrong.com/posts/8EqTiMPbadFRqYHqp/how-old-is-smallpox
https://www.lesswrong.com/posts/Xt22Pqut4c6SAdWo2/what-self-help-has-helped-you
3 examples of not as good questions
I’d prefer to be gentle when listing examples of not-so-good questions, but a few I think are unambiguously in this category are:
(No clarification given in post, whole premise is kind of odd)
https://www.lesswrong.com/posts/TKHvBXHpMakRDqqvT/in-what-ways-are-holidays-good
(Bizarre, alien perspective. If I were a visitor and I saw this post I would assume the forum is an offshoot of Wrong Planet)
https://www.lesswrong.com/posts/AAamNiev4YsC4jK2n/sunscreen-when-why-why-not
(I don’t quite understand what the warrant is for discussing this on LW. Yes, it’s a decision, which involves risk, but lots of things in our lives are decisions involving risk. If those are the only criteria for discussion, I don’t really see any reason why we should be discussing rationality per se as opposed to the thousands of little things like this we face throughout our lives.)
What I Would Like To See
Personally, I think it would help if you clarified the purpose and scope of the questions feature: what sort of questions people should be asking, what features make a good question, some examples of well-posed questions, etc. Don’t skimp on this or chicken out. Good principles should exclude things; they should even exclude some things which would be net positive value to discuss! Excluding a few positive edge cases is the price of keeping net-negative gray areas from dominating.
That is to say, I want some concrete guidelines I can point to and say “Sorry, but this question doesn’t seem appropriate for the site.” or “Right now this question isn’t the best it could be; some ways you could improve it to be more in line with our community policy are...”
1987 Sci-Fi Authors’ Time Capsule Predictions For 2012
The official LessWrong 2 server is pretty heavy, so running it locally might be a problem for some people.
Whistling Lobsters 2.0 uses a clone of the LW 2 API called Accordius as its backend. Accordius is, with some minor differences, nearly an exact copy of the pre-October LW 2 API. It was developed with the goal that you could put the GreaterWrong software in front of it and it would function without changes. Unfortunately, due to some implementation disagreements between Graphene and the reference GraphQL library in JavaScript, it’s only about 95% compatible at the time of writing.
Still, this thing will run on a potato (or more specifically, my years-old Intel Atom-based netbook) with GreaterWrong running on the same box as the front end. That makes it a pretty good option for anyone who’s looking to understand GraphQL and the LW 2 API. This implementation does not take into account the changes made in the big API update in October. As a consequence, it may be more useful at this point for learning GraphQL than the LW 2 API specifically.
(Note to future readers: The GraphQL API is considered legacy for Accordius in the long term, so if you’re reading this many months or even years from now, you may have to go back to the first alpha releases to get the functionality described here. Pre 1.0 perhaps.)
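For anyone who hasn’t worked with GraphQL before, the mechanics are simpler than they sound: a query is just a string describing the shape of the data you want, POSTed as JSON to a single endpoint. Here’s a minimal sketch in Python; the endpoint URL and the field names in the query are illustrative assumptions, not the actual Accordius schema, so check the source for the real ones.

```python
import json
import urllib.request

# Hypothetical local endpoint -- a real Accordius instance may differ.
ENDPOINT = "http://localhost:8000/graphql"

# A GraphQL query describes the shape of the result you want back.
# These field names are made up for illustration.
QUERY = """
query RecentComments($limit: Int) {
  comments(limit: $limit) {
    _id
    body
    postedAt
  }
}
"""

def build_request(query, variables):
    """Package a GraphQL query as a standard JSON-over-HTTP POST request."""
    payload = json.dumps({"query": query, "variables": variables}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request(QUERY, {"limit": 5})
# GraphQL requests are plain POSTs; urllib.request.urlopen(req) would send it.
print(req.get_method())
```

The point is that there’s no special client machinery required; any HTTP library can talk to a GraphQL backend, which is part of what makes a small reimplementation like this useful for learning.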
A great deal of my affection for hackers comes from the unique way they bridge the world of seeking secrets about people and secrets about the natural world. This might seem strange, since the stereotype is that hackers are lonely people alienated from others, but this is only a half-truth. In both the open source MIT tradition and the computer intrusion phone phreaking tradition, the search for secrets and excellence is paramount, but fellow travelers are absolutely welcome on the journey. Further, much of even the ‘benign’ hacking tradition relies on the manipulation of social reality: the invisible relationships between people and symbols and things that are obvious to us but might confuse a visitor from Mars. For example, this story from the Jargon File about sneaking a computer into a hospital exemplifies the nature of social reality well. In Sister Y’s essay she hypothesizes that nerds are people who have a natural ability to see the underlying mechanisms of social reality in a way that is invisible to most people, an ability which arises, paradoxically, from their natural inability to intuit it in one way or another. Things that normal people take for granted confuse nerds, which provides the impetus for making discoveries about social reality itself.
A dictionary definition might be something like:
The map of the world which is drawn by our social-cultural universe, and its relationship to the standard protocols of societal interaction & cooperation. Implicit beliefs found in our norms & behavior toward others, as expressed through: coercive norms, rituals, rank, class, social status, authority, law, and other human coordination constructs.
One aspect of social reality is the offsets between our shared map and the territory. In many old African regional faiths, it was thought necessary for commoners to be kept away from upper class shamans and wizards. Otherwise the commoners’ influence might damage the shamans’ powers, or cause them to lose emotional control and damage the community. The idea that these people have magic powers and must be protected, along with the social norms and practices that arise from that, is an example of social reality. It has very little to do with any real magic powers, but clearly there was some in-territory sequence of events that got everyone to decide to interpret the world this way.
This foreign, ancient example is useful because you have no emotional attachment to it, so you’re in a position to evaluate it objectively. Ask yourself how people might react to a lower class person who insisted on touching the magic king. What about someone who refused to recant their belief that the magic king had no influence on the weather? As you imagine the reactions, consider what things in your own social sphere or society would be met with similar feelings from others. Then ask yourself if they’re a human universal, or something that could theoretically be different if people felt differently. Once you’ve identified a handful of these you’re on your way to examining social reality as a phenomenon. I suggest you keep most of these thoughts to yourself, for your own protection.
Another aspect is the invisible models and expectations of others. In the Jargon File example above, the guard has been told that his role is to prevent unauthorized items from entering the building. This role is very much real, and its “procedures” are as rote and trickable as any computer program. As Morpheus tells us:
This is a sparring program, similar to the programmed reality of The Matrix. It has the same basic rules, rules like gravity. What you must learn is that these rules are no different than the rules of a computer system. Some of them can be bent. Others can be broken.
A great deal of the phone phreaking tradition is about running a wedge into the places where social reality and the territory don’t meet, and performing wild stunts based on them. For example, did you know that one of the most common attacks against locks is to just order a second lock because they’re keyed-alike?
The big difference, of course, is that when you trick a computer program, it doesn’t notice. Humans are very likely to notice you tricking them if you violate their expectations. So the art of social engineering is a very different realm in that respect: the technical complexity is lower, but the solution space is narrowed by what people won’t perceive as too strange. It engages your EQ at least as much as it engages your IQ.
----
Some book recommendations for a better sense:
Ghost In The Wires by Kevin Mitnick
The Challenger Launch Decision by Diane Vaughan
The Righteous Mind by Jonathan Haidt
I think users who are used to Markdown will often use single bold words as headings, and I feel hesitant to deviate too much from the standard Markdown conventions for how Markdown should be parsed into HTML.
Don’t know where you got this notion from, but absolutely not. Markdown has syntax that’s used for headings, and I’ve never used bolded text as a replacement for a proper heading.
(As a wider point, Said Achmiz is as usual correct in his approach and it would be much appreciated if you didn’t inflict any more appalling HTML practices on API consumers)
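For reference, standard Markdown already distinguishes headings from bold text at the syntax level, so there’s no need to overload one for the other; a quick illustration:

```markdown
# Top-level heading (ATX style)
## Second-level heading

Top-level heading (Setext style)
================================

**This is just bold text, not a heading, and renders as <strong>, not <h1>.**
```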
I don’t think it’s really “a few people singing songs together”. It’s more like...an overall shift in demographics, tone, and norms. If I had to put it succinctly, the old school LessWrong was for serious STEM nerds and hard science fiction dorks. It was super super deep into the whole Shock Level memeplex thing. Over time it’s become a much softer sort of fandom geek thing. Rationalist Tumblr and SlateStarCodex aren’t marginal colonies, they’re the center driving force behind what’s left of the original ‘LessWrong rationality movement’. Naturally, a lot of those old guard members find this abhorrent and have no plans to ever participate in it.
I don’t blame them.