Yvain’s latest post at SSC is, among other things, about the dynamics of tribes:
Scholars call the process of creating a new tribe “ethnogenesis” … My model of ethnogenesis involves four stages: pre-existing differences, a rallying flag, development, and dissolution.
Homework assignment: apply the four-stage model to LessWrong.
Besides the triviality that everything has a beginning, a development, and an end, I find the model too simplistic; it already shows some cracks when applied to LessWrong:
pre-existing difference: in the case of LW there was only one man, Eliezer, who perceived the difference between what he considered a sane approach to AI and all the other approaches;
then came a blog presenting this approach and “raising the sanity waterline”, which I think created the difference in many of his followers, or at least attracted enough interest. In this case, the rallying flag created the difference in those who attended, which in turn created (or smoothed over) more differences;
development and end are mixed in this case: there was supposedly a peak and a denouement, but the site is still active, and the tribe has fragmented and regrouped again.
there was only one man, Eliezer, who perceived the difference between what he considered a sane approach to AI and all the other approaches
But the LW community was not really, in the first instance, built around Eliezer’s (or anyone’s) ideas about how to approach AI. It was built around his ideas about how to think rationally, and a lot of that existed before Eliezer wrote anything on the subject.
(I am not making claims about the originality or unoriginality of Eliezer’s writings about rationality. The point, with which I am absolutely sure he would agree, is that much of the difference between a typical LW rationalist and a typical non-rationalist lies in things that Eliezer did not invent and was not the first to write down.)
then came a blog presenting this approach and “raising the sanity waterline”, which I think created the difference in many of his followers
That is… a very strong statement. You think that EY’s blog actually created the differences in the people who then coalesced into a tribe around their creator?

What do you count as regrouping?
The dissolution stage is described in greater detail in the linked article. The presence of people who proudly say they never bothered to read the Sequences (available as a free book now) was a huge warning long ago, but we somehow bought the belief that caring about your garden is cultish. Well, the garden is quite trampled now.
One improvement I can imagine is creating specific subs for the “hardcore” topics.
EDIT:
I am not sure I understand Scott’s explanation for the dissolution phase. He seems to suggest that it happens when “a tribe was never really that different from the surrounding population, stops caring that much about its rallying flag, and doesn’t develop enough culture”. Yeah, but why does that happen?
Sometimes the difference really wasn’t so big. Imagine a minority that is not that much different from the majority, but is isolated by a language barrier, and maybe both sides have a habit of avoiding each other, all of which contributes to creating myths about how the other side is completely weird. Then at some moment people start interacting with each other, the minority learns the majority language, and suddenly they all see that they were quite similar. And then the old tribal boundaries dissolve, to be replaced by new boundaries, e.g. along hobbies or social class.
But I don’t think this applies to LW. I mean, when I found LW, I was shocked to see that there actually exist people like me. (Hard to describe what exactly that means, other than “I know it when I see it”.) And now, a few years later, I still perceive a huge difference between me and most of society.
However, the LW website is no longer literally the only place where I can meet “LW-style” people, because the rationalist diaspora has grown, and now I can meet them e.g. at SSC. There are also the meetups, and there are people I have met at the meetups whom I would stay in contact with even if the meetups dissolved. So the LW website no longer has a monopoly on “LW-style” people.
But there is also another way people can find out that they are not “really that different from the surrounding population” and that they don’t care that much about their rallying flag… and that is when the community gets diluted by outsiders who never cared about the rallying flag, and who are closer to the general population than the old members. Then the community as a whole drifts closer to the general population even if the original members didn’t.
This seems similar, but there is a difference. In the second model, the old members still remain different; only their community was sabotaged by the new members who “came, saw, and conquered” (not necessarily intentionally). Even if they wanted to start over, they now have a coordination problem, because the original rallying flag is no longer a good Schelling point: people now associate it with the diluted version of the community.
Contrary to Scott’s explanation that people in the atheist community became bored with being only atheists and decided to become SJWs instead because it seemed like more fun… I think it was actually the second kind of process: the atheist community was joined by people who didn’t care about atheism that much (that’s not a strawman; some of them admitted it afterwards) and who mostly saw it as a place where they could recruit for their own ideas. They came, converted a few members, tried to take over the whole community, met resistance, created a schism, and now keep attacking the original group in frustration. So it’s not that the old-style atheists became bored with atheism; the boredom with atheism came from people who never strongly identified as atheists, except instrumentally for a short time during the takeover attempt.
So far LW has been successful at holding off these kinds of attacks (some people even doubt they actually happened). The actual danger for us comes from… not exactly “normies”, but rather people somewhere on the scale between “LW-style” people and “normies”. There are no clear dividing lines. So while “normies” will avoid this site, it may be attractive to people who are only “90% LW-ish”… and if this is something like a bell curve, they will soon form a majority, then the site will become attractive to people who are “80% LW-ish”, etc., and then suddenly it is not the old community of “LW-style” people anymore, but it’s not obvious where the line should have been drawn, because the process was so gradual.
EDIT2:
What I mean by “X% LW-ish” is something like: “I enjoy talking with the smart people whom I find on LW, and I find some of their topics quite interesting, but I don’t care about artificial intelligence, and I am not that obsessed with increasing my rationality. I don’t have time to read the Sequences, but here are some interesting links that I wanted to share, and I would also like to debate personal opinions on X, Y, and Z.” There is nothing wrong with that per se, and on some days I would enjoy that kind of debate, but I don’t want to see LW replaced by it. I would like to see it on a different website, or if that is not possible, at least in a different sub within LW.
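To make the drift mechanism concrete, here is a minimal toy sketch in Python. Every number, and the one-dimensional “LW-ishness” score itself, is invented purely for illustration: newcomers join whenever they are close enough to the current community average, so each slightly-less-LW-ish wave lowers the bar for the next one, and the average slides toward the general population even though no founder changed.

```python
import random
import statistics

# Toy model of the dilution dynamic (purely illustrative; every number is made up).
# Each person gets a "LW-ishness" score in [0, 1]. Founders sit near 0.9; the
# general population is a bell curve around 0.4. Newcomers join whenever they are
# within `join_tolerance` of the *current* community average, so each wave of
# slightly-less-LW-ish members makes the next, even-less-LW-ish wave feel at home.

random.seed(0)

founders = [min(1.0, random.gauss(0.9, 0.03)) for _ in range(50)]
population = [random.gauss(0.4, 0.15) for _ in range(100_000)]
join_tolerance = 0.15

community = list(founders)
for year in range(10):
    mean = statistics.mean(community)
    eligible = [p for p in population if abs(p - mean) < join_tolerance]
    # Only a small fraction of the eligible people actually shows up each "year".
    community += random.sample(eligible, min(200, len(eligible)))
    print(f"year {year}: size={len(community):5d}  mean LW-ishness={statistics.mean(community):.2f}")
```

Under these made-up numbers the community average slides steadily downward year after year, with no single step at which an obvious line could have been drawn.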
I made a comment related to this on the SSC post, about the rationalists I met in person in the Bay Area. I think it’s a continued and extended version of what you stated above, with some people in the Bay Area calling themselves rationalists while being in the 20% LW-ish (or lower) crowd. I primarily focused on the “overcoming biases” and “getting stronger” parts.
“I witnessed some trends in rationalists during a visit to the Bay Area recently that make far more sense to me now when seen through the lens of your generation descriptions. The instrumental rationalists seemed to fit into three Generation-type groups.
Generation 1 agreed with 50% or more of The Sequences and attempted to use the ideas from them, CFAR, and other sources in their daily lives to improve themselves. They seemed to take all of it quite seriously.
Generation 2 possessed a mild respect for CFAR, less respect for The Sequences themselves (and likely read next to none of them), made sure to make a comment of disdain for EY almost as if it were a prerequisite to confirm tribe membership (maybe part of the “I’m not one of THOSE rationalists”?), and had a larger interest in books that their friends recommended for overall self-improvement.
Generation 3 hadn’t read any of The Sequences, had read only a few blog posts, loosely understood some of the terms being regularly thrown around (near/far mode, object level, inside/outside view, map/territory, etc.) but didn’t know the definitions well enough to actually use the mental actions of the techniques themselves, and considered themselves rationalists via group affiliation, showing up to events, and having friendships, rather than by becoming more rational themselves and attempting to optimize their own lives and brains.
I had limited exposure to the Bay Area and would be very interested if anyone else thinks these categories actually match the territory there. This also leaves out epistemic rationalists (some of whom I met) who don’t fit into the three generations presented above.”
It is interesting how a community built around the Sequences gradually changed into a community of people who treat mentioning the Sequences almost as a faux pas.
With the consequence that the ideas from the Sequences are more often mentioned than used (well, the few of them that are mentioned at all), and rationality becomes a question of group affiliation.
There is an analogy with Christianity, except that what took Christianity 2000 years, we managed to achieve in 2000 days. Truly, progress is accelerating exponentially, and the Singularity is near!
What do you think LW’s rallying flag is?

The combination of:

transhumanism (“Friendly AI”)

rationality (“overcoming biases”)

improving oneself (“becoming stronger”)

improving the world (“Friendly AI” + “raising the sanity waterline”)
The individual elements are already out there—various kinds of transhumanists and futurists; psychologists such as Kahneman; the whole self-improvement industry; and thousands of political or religious movements. But the problem is that self-improvement and world-changing movements are typically full of insanity. And dreaming about a transhuman future is nice, but it’s not obvious how people like me could contribute.
So, speaking for myself, what I hear in the Sequences is:
“You can become stronger, find like-minded friends, improve the world, and ultimately bring the sci-fi future… without having to sacrifice your own sanity. Actually, being smart and sane will be helpful.”
(And the dissolution happens when people seem no longer interested in improving themselves, improving the world and bringing the sci-fi future; only in having a place to procrastinate by sharing news articles and nitpicking everything. Something like Mensa online.)
Candidates:

Loosely, “transhumanism”, or, more basically, a belief that “radical” self-improvement or self-alteration is possible and desirable. It is no coincidence that people who find the idea of uploading their minds to computers appealing might also enjoy “life hacks.” Both ideas involve self-modification. The very idea of “upgrading your rationality” presumes that a level of self-modification is possible, to an extent that a normal person might deny.
Interest in futurism, often in one utopian flavor or another. The concept of FAI turns bullshitting about the Singularity into something that feels like an actionable engineering problem rather than a purely sophistic exercise.
You could draw a Venn diagram of three circles, labeled Futurism, Rationality, and Transhumanism. The three concepts overlap conceptually by default. The sweet spot where all three overlap contains the topics of FAI, Fun Theory, and AI risk in general.
Our propensity to subscribe to weird political theories can be viewed as the overlap between Futurism and Rationality, i.e. applying logical and dispassionate thinking toward social structures.
Our belief that it’s even possible (and desirable) to “raise the sanity waterline” lies at the intersection of Transhumanism and Rationality.
The overlap of Futurism and Transhumanism is too obvious to belabor.
This is a lot of words reiterating basically the idea of Eliezer’s Empirical Cluster in Personspace, which he defines extensively as “atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc”. But, I think a lot of our now-prominent diaspora bloggers don’t fit into that personspace very well, as he defined it.
So, if I had to really drill down to the crux of it, I would say the rallying flag looks something like a default disposition towards taking ideas seriously, plus an assumption that radical self-change is possible. Everything else just falls out of these psychological stances.
I think you’re describing the common interests of the tribe, but that’s a different thing from the rallying flag.
Since we’re operating within Yvain’s framework, we’ll use his definition, which is:
The rallying flag is the explicit purpose of the tribe. It’s usually a belief, event, or activity that get people with that specific pre-existing difference together and excited.
HPMoR, for example, is (was?) a rallying flag for a subset of the LW tribe. But I don’t think a “default disposition” would qualify (Yvain would call it a stage 1 “pre-existing difference”), and an “assumption” is very doubtful as well.