I’ve never heard of metatroll either, but I won’t hold that against them :)
I believe you mean to say that you don’t like the tone of the original post, in that it feels “accusatory”. I realise I failed to make it clear that I picked up on that when I made my response. I agree it comes across as accusatory, especially the three levels of escalation leading to “I have never heard of you”. I am glad you said:
I won’t hold that against them :)
Keeping a welcoming community is very important to us.
I was white-knighting for the weird suns. See, I haven’t read Chapman. I just assumed you were here to steal their work by annexing it to your own philosophical brand of “meta-rationality”. I didn’t know that was one of his buzzwords.
Of course you have a right to be a Chapmanite; even that version of postrational subculture is surely better than the subrational postculture that surrounds it. But do not imagine for a moment that his is the only way to go meta.
the tone of my response was meant to defuse whatever tension was the cause of the accusatory tone in the first place.
Absolutely, I see that in the sense of the idea of “leaving things unsaid” (from that specific culture). If I were in the position of metatroll, I would take it as perceived smugness (leading downhill into more smugness in response), not in the lighthearted “don’t talk about the elephant in the room” way that you intended.
Metatroll started it; you played with it instead of either letting it go or responding to it directly. I contributed by ignoring it. Do continue to hang around and share your ideas with us.
FWLIW, I took “I’ve never heard of metatroll either, but I won’t hold that against them :)” as intended to have a net-deëscalatory effect, even if it didn’t seem to be entirely subtext-free. (and this combination of attributes is not something I have a problem with)
Well, I “hang around in rationalist corners of the world” and I had never heard of it until it popped up in some comments here on LW this week. I don’t think it’s quite as universal as you think, and a brief explanation, even three sentences, would have helped the post a lot.
My observation is that post-rationalists are much more interested in culture and community than the Rationalist community is. This is not to say that the Rationalist community doesn’t value culture and community; in fact they get discussed quite frequently (e.g. the solstice was established explicitly to create a sense of community and “the divine”). The difference is that while Rationalists are most interested in biases, epistemology, and decision theory, post-rationalists are most interested in culture, community, and related things. Mainstream Rationalists are usually, or at least sometimes, loosely tied to Rationalism as a culture (otherwise the solstice wouldn’t exist), but mostly they define their interests as whatever wins and the intellectual search for right action. Post-rationalists, on the other hand, view the world through a lens where culture and community are highly important, which is why they think Rationalism even represents a thing you can be “post” to, while many Rationalists don’t see it that way.
I don’t think that Rationalists are wrong when they write about culture; they usually have well-argued points that track true things. The main difference is that post-rationalists have a sort of richness to their descriptions and understanding that is lacking in Rationalist accounts. When Rationalists write about culture, it has an ineffable dryness that doesn’t ring true to my experience, while post-rationalist writing doesn’t. The main exception to this is Scott Alexander, but in most other cases I think the rule holds.
Ultimately, I don’t think there is much difference between the quality of insights offered by Rationalists and post-rationalists, and I don’t think one is more right than the other. When reading the debates between Chapman and various Rationalist writers, the differences seem fairly minute. But there is a big difference in the sorts of things they write about. For myself, I find both views interesting and so far have not noticed any significant actual conflict in models.
Edit: Another related difference is that post-rationality authors are more willing to go out on a limb with ideas. Most of their ideas, dealing in softer areas, are necessarily less certain. It’s not even clear that certainty can be established with some of their ideas, or whether they are just helpful models for thinking about the world. In the Rationalsphere people prefer arguments that are clearly backed up at every stage, ideally with peer reviewed evidence. This severely limits the kind of arguments you can make, since there are many things that we don’t have research on and will plausibly never have research on.
The Meaningness book’s section on Meaningness and Time is all about culture viewed through Chapman’s lens. Ribbonfarm has tons of articles about culture, most of which I haven’t read; I haven’t been following post-rationality for very long. Even on the front page right now there is this, which is interesting and typical of the thought.
Post-rationalists write about rituals quite a bit I think (e.g. here). But they write about it from an outsider’s perspective, emphasizing the value of “local” or “small-set” ritual to everyone as part of the human experience (whether they be traditional or new rituals). When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don’t identify as a group to the extent that they want to have “post-rationalist rituals.” David Chapman is a very active Buddhist, for example, so he participates in rituals (this link from his Buddhism blog) related to that community, and presumably the authors at ribbonfarm observe rituals that are relevant within their local communities.
Honestly, I don’t think there is much in the way of fundamental philosophical differences. I think it’s more like Rationalists and post-Rationalists are drawn from the same pool of people, but some are more interested in model trains and some are more interested in D&D. It would be hard for me to make this argument rigorous though, it’s just my impression.
Honestly, I don’t think there is much in the way of fundamental philosophical differences.
I suspect that if there were a specific definition of post-rationalist philosophy, I would probably agree with most of it. I suspect that most of it could even be supported by the Sequences. When I read explanations of how post-rationalists differ from rationalists, the part describing rationalists always feels like a strawman. (Rationalists believe that emotions are unimportant. Post-rationalists are smarter than that. Rationalists believe that you should ignore your intuition. Post-rationalists are smarter than that. Rationalists believe System 1 is always bad, and System 2 is always good. Post-rationalists are smarter than that. Etc.) By such definitions, I am mostly a post-rationalist… but more importantly, so is Eliezer, and so is CFAR… and the question is: “So who the heck are these stupid ‘rationalists’ everyone talks about; where can I find them? Are they the same people Julia Galef called Straw Vulcans?”
A more charitable reading would be that while wannabe rationalists admit, in theory, the importance of emotions / intuition / System 1, in practice they seem to ignore it. Even if CFAR teaches lessons specifically about this stuff, you wouldn’t know it from reading Less Wrong (other than the articles specifically saying that CFAR teaches this), because there is something straw-Vulcanish about the LW culture. Therefore a new culture needs to be created, one that publicly embraces emotions / intuition / System 1. And the best way to keep the old culture away is to refuse to use the same label.
When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don’t identify as a group to the extent that they want to have “post-rationalist rituals.”
Seems to me like what actually happened could be this:
Eliezer wrote a blog about the “art of rationality”. The blog mostly focuses on explaining some basic concepts of rationality (anticipated experience, belief in belief, mysterious answers, probability and updating...) and on some typical ways humans routinely go astray (affective spirals, tribalism...). Although many people complain that the Sequences are too long, Eliezer considers them merely a specialized introduction necessary (but maybe not sufficient) to avoid the typical mistakes smart humans so often make when they try to be rational. There is still a lot to be done, specifically “instrumental rationality, in general (...) training, teaching, verification, and becoming a proper experimental science based on that (...) developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...”.
This blog attracted a community which turned out to be… less impressive than a few of us expected. A few years later, we still don’t have an army of beisutsukai. (To be blunt, we barely have a functional website.) We don’t have a reliable system to “level up” wannabe rationalists to make them leaders or millionaires or whatever “winning” means for you. A few good things happened (mass media talk about AI risk, effective altruism is a thing), but it’s questionable how much of that should be attributed to LW, and how much to other sources. Most of what we do here is talking, and even the quality of talking seems to be going down recently.
The novice goes astray and says “The art failed me”; the master goes astray and says “I failed my art.”
Seems to me the divide between “rationalists” and “post-rationalists” is related to how each approaches this disappointment. Rationalists face the (emotionally) difficult problem of explaining why they aren’t winning, why they haven’t raised the sanity waterline yet, why they are still mostly isolated individuals… and what exactly they are going to do about it. Also known as: “Where do you see yourself 5 years from now?”, but using the outside view.
Post-rationalists solve this problem by abandoning the label, and associating all failures with the old label. I wonder what happens five years later; will “post-post-rationality” become a thing? -- I could already start writing some post-post-rationalist slogans: “Rationalists believe that emotions are unimportant. Post-rationalists believe that reason is unimportant. We, the smart post-post-rationalists, recognize that both reason and emotion have their important place in human life, and need to be used properly. Rationalists worship Bayes; post-rationalists worship Kegan. Post-post-rationalists recognize that no science can build on a single person, and that one needs to consider multiple points of view...”
I basically agree with all of this with one quibble. I think it is very easy to underestimate the impact that LessWrong has had. There are a lot of people (myself included) who don’t want to be associated with rationality, but whose thoughts it has nonetheless impacted. I know many of them in real life. LessWrong is weird enough that there is a social cost to having the first google result for your real name point to your LessWrong comments. If I am talking to someone I don’t know well about LessWrong or rationality in general I will not give it a full-throated defense in real life, and in venues where I do participate under my real name, I only link to rationalsphere articles selectively.
Partially because of this stigma, many people in startupland will read the Sequences, put them in their toolbox, then move on with their lives. They don’t view continuing to participate as important, and surely much of the low-hanging fruit has long since been plucked. But if you look at the hints, you will find tidbits of information that point to rationality having an impact.
Ezra Klein and Patrick Collison (CEO of Stripe) had an extensive conversation about rationality, and both are famous, notable figures.
A member of the Bay Area rationality community was rumored to be a member of the Trump cabinet.
Dominic Cummings (the architect of the Brexit “Leave” campaign) points to concept after concept that is both core to and adjacent to rationality, so much so that I would be genuinely surprised if he were not aware of it. (Perhaps this isn’t good for rationality depending on your political views, but don’t let it be said that he isn’t winning).
OpenAI was launched with $1B in funding from a Silicon Valley who’s who, and they have been in dialogue with MIRI staff (and, interestingly, Stripe’s former CTO is now OpenAI’s CTO, so he obviously knows about the rationalsphere). In general, tons of interest has developed around AI alignment from multiple groups. Since this was the fundamental purpose of LessWrong to begin with, Eliezer at least is winning beyond what anyone could have expected based on his roundabout way of creating mindshare. We can’t say with certainty that this wouldn’t have happened without LessWrong, but personally I find it hard to believe that it didn’t make a huge impact on Eliezer’s influence within this field of thought.
Do we have an army of devout rationalists that are out there winning? No, it doesn’t seem so. But rationalism has had a lot of children that are winning, even if they aren’t looking back to improve rationalism later. Personally, I didn’t expect LessWrong to have had as much impact as it has. I realized how hard it is to put these ideas into action when I first read the sequences.
Thank you for the optimistic words. However, when I look at historical examples, this still seems like bad news in the long term:
rationalism has had a lot of children that are winning, even if they aren’t looking back to improve rationalism later
Consider Alfred Korzybski, the author of Science and Sanity and founder of General Semantics. He was an “x-rationalist” of his era, 80 years ago. He inspired many successful things; for example, Cognitive-Behavioral Therapy can be traced to his ideas. So we can attribute a lot of “wins” to him and to people inspired by him.
He also completely failed at his main goal, preventing WW2. Nor does it seem like humanity became more rational, which was his instrumental goal for achieving the former. (On second thought, maybe humanity actually is more rational than back then, and maybe he even contributed to this significantly, but I don’t see it because it became the new normal.)
If today’s rationalist movement follows the same path, the analogous outcome would be a few very successful startup owners, and then… an unfriendly AI kills us all, because everyone was too busy using rationality for their personal goals and didn’t contribute to the basic research and “raising the rationality waterline”.
And in the Everett branch where humanity fails to develop a smarter-than-human AI, 80 years later the rationalist movement will be mostly forgotten; there will be some pathetic remains of CFAR trying to make people read “Rationality: From AI to Zombies”, but no one will really care, simply because the fact that they have existed for so long without conquering the world will be evidence against them.
I’d like to do better than this. I think I am making progress in my personal life, and a few of those improvements are even measurable, but it is really slow and takes a lot of time. And I believe a long-term solution consists of rationalist groups, not isolated individuals. Making money individually is great, but to change humanity we need some social technology that can replicate rationalist groups. Something like a scout-movement equivalent for LW meetups would be a nice beginning.
I don’t know why the discrepancy exists, but it seems to me that a great deal of postrationality is littered with historical examples.
I also share your skepticism of clear psychological progression, but would point out that there are plenty of cases where people diverge in some ways but converge in more meta ones: e.g. diverging toward liberal or conservative but converging in political acumen, or diverging toward minimalism or luxury but converging on environmental modification.
My experience suggests an analogy or continuum? As RationalWiki is to LW, so LW is to so-called post-rationalists.
RationalWiki is highly mission-driven, centralized around orthodox summaries of various topics, and not really a conversation at all. Relatively speaking, RW is the most obsessed with purifying the ambient culture while sticking to the rituals of peer review (or whatever the NSF or NIH insists on for grant funding lately), and is the least confident in its ability to have original thoughts about hard things.
So-called post-rats are full of opinions and aren’t particularly interested in cargo-cult practices around citation and seriousness and wikis and so forth. If you can’t judge things yourself, you’re probably not in the audience. If weird surfaces scare some of the audience away, there are fewer worries about someone dumb being misled by a joke. They tend to be professional ML people and/or to have read a pile of history books, and they love a good mind quake. Books and conversation and trolling are essential. Also, maybe instead of aspiring to be better at reasoning they aspire to be more normal?
Modern LW is sort of in the middle. It has a wiki, wants to save the world, and prefers posts to be like a journal article with citations… but it also thinks that solid reasoning is more essential than “science”, and that fan fiction isn’t such a bad way to get a message out without dealing with the downsides of respectability ;-)
Another core text is Seeing like a State: How Certain Schemes to Improve the Human Condition Have Failed. The concept it defines is “legibility”, summarized in this Ribbonfarm post by Venkat. A legible system is standardized, categorized, manipulable from the top down: the numbered grid of New York City’s streets. An illegible system tends to be evolved rather than designed, and makes sense to its users but is hard to explain: the paved cow paths of Boston. The book is about the dangers of demolishing old illegible systems to install new legible ones—a depressingly common historical failure mode. The idea is close to the heart of postrationality, which is more interested in studying seemingly irrational things to see how they tick than it is in optimizing them.
My favorite introductory post is Sarah Perry’s The Systems of the World, which categorizes some of her other posts under the umbrella of being able to play flexibly with systems for thinking. Close connections to Chapman’s definition of metarationality, here.
(Looks like you accidentally drafted this post and then undrafted it again, which resets the post-date and brings it back to the frontpage. I set it back to its initial posting-date.)
I registered r/MetaRational, r/MetaRationality and r/Meaningness on reddit in case there’s any interest in building a reddit community around these topics. I guess it might interfere a bit with the revival of LW? I don’t know if there’s any value added by me registering these subs, let me know what you think.
(Also, obviously I’d sign over r/Meaningness to David Chapman if he displays interest)
Why is there even any need for these ephemeral “beyond-isms”, “above-isms”, “meta-isms”, etc?
Sure, not all people think/act 100% rationally all the time (not to mention groups/societies/nations), but that should not be a reason to take this as a law of physics, a baseline, an axiom, and build a “cathedral of thoughts” upon it (or any other theology). Don’t understand or cannot explain something? Same thing: not a reason to randomly pick some “explanation” (= bias, baseline) and then mask it with logically built theories.
Naively, one would say: since we began to discover logic, math, and the rational (scientific) approach in general thousands of years ago, there’s no need to waste our precious time on any metacrap.
Well, there’s only one obvious problem: look who is doing it. Not a rational engine but a fleshy animal with a wetware processor, largely influenced even by its reptilian brain and amygdala, with a reward function that includes stuff like good/bad, feelings, FFF reflexes, etc.
Plus the treachery of intuitive and subconscious thinking: even if this “background” brain processing is 100% “rational”, logical, and based on our knowledge, it disrupts the main “visible” rational line of thought simply because it “just appears”, somehow pops up… and to be rigorous, one has to in principle check or even “reverse engineer” all the bits and pieces to really “see” whether they are “correct” (whatever that may mean).
The point?
As we all know, it’s damn hard to be rational even in restricted and well-defined areas, never mind “real life”… as all the biases and fallacies remind us.
Often it’s next to impossible even to realize what just “popped up” from the background (often heavily biased: analogies, similarities, etc.) and what’s “truly rational” (rigorous/logical/unbiased) in your main line of thought. And there’s the whole quicksand field of axioms, (often unmentioned) assumptions, selections, restrictions, and other baseline shifts/picks and biases.
So, did these meta-ists really HAVE TO go “beyond” rationality? Because they “found limits”? Or somehow “exhausted the possibilities” of this method?
Since, you know, mentioning culture, community, society, etc. does not really sound like the “killer application” to me: these subjects are (from the rationalist point of view) to a large extent exactly about biases, fallacies, baselines, axioms, etc., certainly much more than about logic or reasoning.
Some of the weird suns are into postrationality, as I would define it, but most of them aren’t. (That, or, they keep their affiliation with postrationality secret, which is plausible enough given their commitment to opsec.)
I would add The Timeless Way of Building to the list of primary texts; Christopher Alexander has been a huge influence for many of us.
You could add Sapiens by Yuval Noah Harari, which eloquently defends the importance of shared myths and convenient fictions… that’s old news to some, but not to all.
I’ve never heard of Keganism, I’ve never heard of your “primary texts of metarationality”, and I’ve never heard of you.
This is why we need downvotes.
Jacob recently renamed his account from a pseudonym. He mentioned it in a welcome thread, I think, or an open thread.
Robert Kegan’s work on the stages of developmental psychology is definitely a concept that hangs around and is debated. (Good summary here: https://meaningness.wordpress.com/2015/10/12/developing-ethical-social-and-cognitive-competence/)
I know of these blogs but have never made much sense of them myself. I figured it was a personal preference in writing styles.
I have nothing to add, it just delights me to see that someone out there is still using the diaeresis.
I left this post without any understanding of what “postrationality” or “metarationality” is. That’s a failure, I think.
Melting Asphalt has this very intriguing analysis of personhood.
Weird Sun Twitter also has a blog, which you may want to include.