This archetype is easily distractible and does not cooperate with other instances of itself, so an entire community of people conforming to this archetype devolves into valuing abstraction and specialized jargon over solving problems.
Obviously there are exceptions to this, but as a first pass this seems pretty reasonable. For example, one thing I feel is going on with a lot of posts on LessWrong and posts in the rationalist diaspora is an attempt to write things the way Eliezer wrote them, specifically with a mind to creating new jargon to tag concepts.
My suspicion is that people see that Eliezer gained a lot of prestige via his writing, notice that naming concepts with unusual names is one of the things he does in that writing, and make the (reasonable) assumption that if they do something similar maybe they will gain prestige from their writing targeted at other rationalists.
I don’t have a lot of evidence to back this up, other than to say I’ve caught myself having the same temptation at times, and I’ve thought a bit about this common pattern I see in rationalist writing and tried to formulate a theory of why it happens that accounts not only for why we see it here but also why I don’t see it as much in other writing communities.
My suspicion is that people see that Eliezer gained a lot of prestige via his writing … and make the (reasonable) assumption that if they do something similar maybe they will gain prestige from their writing targeted at other rationalists.
I’d like to emphasize the idea “people try to copy Eliezer”, separately from the “naming new concepts” part.
It was my experience from Mensa that highly intelligent people are often too busy participating in pissing contests, instead of actually winning at life by engaging in lower-status behaviors such as cooperation or hard work. And, Gods forgive me, I believed we (the rationalist community) were better than that. But perhaps we are just doing it in a less obvious way.
Trying to “copy Eliezer” is a waste of resources. We already have Eliezer. His online articles can be read by any number of people; at least this aspect of Eliezer scales easily. So if you are tempted to copy him anyway, you should consider the hypothesis that what you are actually trying to copy is his local status. You have found a community where “being Eliezer” is high-status, and you are unconsciously pushed towards increasing your status. (The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.)
Instead, the right thing to do is:
cooperate with Eliezer, especially if your skills complement his. (Question is, how good is Eliezer himself at this kind of cooperation. I am on the opposite side of the planet, so I have no idea.) Simply put: anything Eliezer needs to get done but doesn’t have a comparative advantage at, if you do it for him, you free his hands and head to do the things he actually excels at. Yes, this can mean doing low-status things. Again, the question is whether you are optimizing for your status, or for something else.
try alternative approaches where the rationalist community seems to have blind spots. Such as Dragon Army, which really challenged the local crab mentality. My great wish is to see other people build their own experiments on top of this one: to read Duncan’s retrospective, to form their own idea of “we want to copy this, we don’t want to copy that, and we want to introduce these new ideas”, and then go ahead and actually do it. And post their own retrospective, etc. So that finally we may find a working model of a rationalist community that actually wins at life, as a community. (And of course, anyone who tries this has to expect strong negative reactions.)
I strongly suspect that the internet itself (the fact that rationalists often coordinate as an online community) is a negative pressure. The internet is inherently biased in favor of insight porn. Insights get “likes” and “shares”; verbal arguments receive fast rewards. Actions in the real world usually take a lot of time, and thus don’t make for good online conversation. (Imagine that every few months you acquire one boring habit that makes you more productive, and as a cumulative result of ten such years you achieve your dreams. Impressive, isn’t it? Now imagine a blog that every few months publishes a short article about the new boring habit. Such a blog would be a complete failure.) I would expect rationalists living close to each other, and thus mostly interacting offline, to be much more successful.
The only thing you cannot copy is his position as a founder. To achieve this, you would have to rebrand the movement, and position yourself in the new center. Welcome, post-rationalists, et al.
The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and want to distinguish themselves.
To the extent that there’s a new person who has a similar founder position right now, that’s Scott Alexander and not anybody who self-identifies as post-rationalist.
Here’s a 2012 comment (predating the map by two years) in which someone describes himself as a post-rationalist to distinguish himself from rationalists: https://www.lesswrong.com/posts/p5jwZE6hTz92sSCcY/son-of-shit-rationalists-say#ryJabsxh7m9TPocqS
The term post-rationalist was popularized by the diaspora map and not by people who see themselves as post-rationalists and want to distinguish themselves.
The post rats may not have popularised the term as well as Scott did, but I think that’s mostly just because Scott is way more popular than them.
To the extent that there’s a new person who has a similar founder position right now, that’s Scott Alexander and not anybody who self-identifies as post-rationalist.
Well, the claim was about what the post rats were (consciously or not) trying to do, not about whether they were successful.
And I think Scott has rebranded the movement, in a relevant sense. There’s a lot of overlap, but SSC is its own thing, with its own spinoffs. E.g. I believe most SSC readers don’t identify as rationalists. (“Rebranding” might be better termed “forking”.)
Will Newsome did use the term before, but I’m not aware of it being used to the extent that it’s worthwhile to speak of him as someone who planned on being seen as a founder. If that was his intention he would have written a lot more outside of IRC.
I agree with a bunch of these concerns. FWIW, it wouldn’t surprise me if the current rationalist community still behaviorally undervalues “specialized jargon”. (Or, rather than jargon, concept handles a la https://slatestarcodex.com/2014/03/15/can-it-be-wrong-to-crystallize-patterns/.) I don’t have a strong view on whether rationalists undervalue or overvalue this kind of thing, but it seems worth commenting on since it’s being discussed a lot here.
When I observe the reasons people ended up ‘working smarter’ or changing course in a good way, they often involve a new lens those people started applying to something. I think one of the biggest problems the rationalist community faces is a lack of dakka and a lack of lead bullets. But I want to caution against treating abstraction and execution as too much of a dichotomy, such that we have to choose between “novel LW posts are useful and high-status” and “conscientiousness and follow-through are useful and high-status” and see-saw between the two.
The important thing is cutting the enemy, and I think the kinds of problems that rationalists are in an especially good position to solve require individuals to exhibit large amounts of execution and follow-through while (on a timescale of years) doing a large number of big and small course-corrections to improve their productivity or change their strategy.
It might be that we’re doing too much reflection and too much coming up with lenses. It might also be that we’re not doing enough grunt work and not doing enough reflection and lenscrafting. Physical tasks don’t care whether we’re already doing an abnormal amount of one or the other; the universe just hands us problems of a certain difficulty, and if we fall short on any of the requirements then we fail.
It might also be that this varies by individual, such that it’s best to just make sure people are aware of these different concerns so they can check which holds true in their own circumstance.
I’ve thought a bit about this common pattern [name concepts with unusual names] I see in rationalist writing and tried to formulate a theory of why it happens that accounts not only for why we see it here but also why I don’t see it as much in other writing communities.
I see the pattern a lot in “spiritual” writings. See, for example, the “Integral Spirituality” being discussed in another recent post.
I have two thoughts on this.
One is that different spiritual traditions have their own deep, complex systems of jargon that sometimes stretch back thousands of years through multiple translations, schisms, and acts of syncretism. So when you first encounter it, you can feel like it’s a lot and it’s new and why can’t these people just talk normally.
Of course, most LW readers live in a world full of jargon even before you add on the LW jargon, much of it from STEM disciplines. People from outside that cluster feel much the same way about STEM jargon as the average LW reader may feel about spiritual jargon. I point this out merely because I realized, when you brought up the spiritual example, that I hadn’t given a full account of what’s different about rationalists: maybe it’s that there’s a tendency to make new jargon even when a literature search would reveal that suitable jargon already exists.
Which is relevant to your point and my second thought, which is that you are right: many things we might call “new age spirituality” have the exact same jargon-coining pattern in their writing as rationalist writing does, with nearly every author striving to elevate some metaphor to the level of a word so that it can become part of a wider shared approach to ontology.
This actually seems to suggest that my story is too specific, and that pointing to Eliezer’s tendency to do this as a cause is maybe unfair: it may be a tendency that exists within many people, and there may be something about the kind of people, or the social incentives, shared between rationalists and new age spiritualists that produces this behavior.
I point this out merely because I realized, when you brought up the spiritual example, that I hadn’t given a full account of what’s different about rationalists: maybe it’s that there’s a tendency to make new jargon even when a literature search would reveal that suitable jargon already exists.
I don’t think this is different for STEM, or cognitive science, or self-help. Having studied both CS and math, and some physics in my off-time, I can say that everyone constantly invents new names for all the things. To give you a taste, here is the first paragraph from the Wikipedia article on Tikhonov regularization:
Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.
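To make it concrete: all of those names refer to what is essentially one penalized least-squares objective. A rough sketch of the shared formula (this is the standard ridge special case; the general Tikhonov form replaces the λ term with ‖Γx‖² for a chosen matrix Γ):

```latex
% Tikhonov regularization / ridge regression / weight decay:
% one penalized least-squares objective under many names.
\hat{x} = \operatorname*{arg\,min}_{x} \; \lVert Ax - b \rVert_2^2 + \lambda \lVert x \rVert_2^2
% For \lambda > 0 this has the closed-form solution
% \hat{x} = (A^\top A + \lambda I)^{-1} A^\top b
```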
You will find the same pattern of lots of different names for the exact same thing in almost all statistical concepts in the Wikipedia series on statistics.
The color coding that was discussed there isn’t anything that the integral community came up with. Wilber looked around for existing paradigms of adult development, picked the one he liked best, and took its terms.
I understand what Wilber means when he says blue because I studied spiral dynamics in a context outside of Wilber’s work. It’s similar to when rationalists take names of biases from the psychological literature that might not be known by wider society. It’s quite different from EY making up new terms.
Wilber’s whole idea about being integral is to take existing concepts from other domains.