What do you think LW’s rallying flag is?
The combination of:
transhumanism (“Friendly AI”)
rationality (“overcoming biases”)
improving oneself (“becoming stronger”)
improving the world (“Friendly AI” + “raising the sanity waterline”)
The individual elements are already out there: various kinds of transhumanists and futurists, psychologists such as Kahneman, the whole self-improvement industry, and thousands of political or religious movements. But the problem is that self-improvement and world-changing movements are typically full of insanity. And dreaming about a transhuman future is nice, but it’s not obvious how people like me could contribute.
So, speaking for myself, what I hear in the Sequences is:
“You can become stronger, find like-minded friends, improve the world, and ultimately bring the sci-fi future… without having to sacrifice your own sanity. Actually, being smart and sane will be helpful.”
(And the dissolution happens when people no longer seem interested in improving themselves, improving the world, and bringing about the sci-fi future; only in having a place to procrastinate by sharing news articles and nitpicking everything. Something like an online Mensa.)
Candidates:
Loosely, “transhumanism”, or, more basically, a belief that “radical” self-improvement or self-alteration is possible and desirable. It is no coincidence that people who find the idea of uploading their minds to computers appealing might also enjoy “life hacks”: both involve self-modification. The very idea of “upgrading your rationality” presumes that self-modification is possible to an extent a normal person might deny.
Interest in futurism, often in one utopian flavor or another. The concept of FAI turns bullshitting about the Singularity into something that feels like an actionable engineering problem rather than a purely sophistic exercise.
You could draw a Venn diagram of three circles labeled Futurism, Rationality, and Transhumanism; the three concepts naturally overlap. The sweet spot where all three meet contains FAI, Fun Theory, and AI risk in general.
Our propensity to subscribe to weird political theories can be viewed as the overlap between Futurism and Rationality, i.e., applying logical and dispassionate thinking to social structures.
Our belief that it’s even possible (and desirable) to “raise the sanity waterline” lies at the intersection of Transhumanism and Rationality.
The overlap of Futurism and Transhumanism is too obvious to belabor.
This is a lot of words reiterating, basically, the idea of Eliezer’s Empirical Cluster in Personspace, which he defines extensionally as “atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc”. But I think a lot of our now-prominent diaspora bloggers don’t fit into that personspace very well, as he defined it.
So, if I had to really drill down to the crux of it, I would say the rallying flag looks something like a default disposition towards taking ideas seriously, plus an assumption that radical self-change is possible. Everything else just falls out of these psychological stances.
I think you’re describing the common interests of the tribe, but that’s a different thing than the rallying flag.
Since we’re operating within Yvain’s framework, we’ll use his definition, which is:
The rallying flag is the explicit purpose of the tribe. It’s usually a belief, event, or activity that get people with that specific pre-existing difference together and excited.
HPMoR, for example, is (was?) a rallying flag for a subset of the LW tribe. But I don’t think a “default disposition” would qualify (Yvain would call it a stage 1 “pre-existing difference”), and an “assumption” is very doubtful as well.