The Meaningness book’s section on Meaningness and Time is all about culture viewed through Chapman’s lens. Ribbonfarm has tons of articles about culture, most of which I haven’t read; I haven’t been following post-rationality for very long. Even on the front page now there is this, which is interesting and typical of the school of thought.
Post-rationalists write about ritual quite a bit, I think (e.g. here). But they write about it from an outsider’s perspective, emphasizing the value of “local” or “small-set” ritual to everyone as part of the human experience (whether the rituals are traditional or new). When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don’t identify as a group to the extent that they want to have “post-rationalist rituals.” David Chapman, for example, is a very active Buddhist, so he participates in rituals related to that community (this link is from his Buddhism blog), and presumably the authors at Ribbonfarm observe rituals relevant within their own local communities.
Honestly, I don’t think there is much in the way of fundamental philosophical differences. I think it’s more that Rationalists and post-rationalists are drawn from the same pool of people, but some are more interested in model trains and some are more interested in D&D. It would be hard for me to make this argument rigorous, though; it’s just my impression.
Honestly, I don’t think there is much in the way of fundamental philosophical differences.
I suspect that if there were a specific definition of post-rationalist philosophy, I would probably agree with most of it. I suspect that most of it could even be supported by the Sequences. When I read explanations of how post-rationalists differ from rationalists, the part describing rationalists always feels like a strawman. (Rationalists believe that emotions are unimportant. Post-rationalists are smarter than that. Rationalists believe that you should ignore your intuition. Post-rationalists are smarter than that. Rationalists believe System 1 is always bad, and System 2 is always good. Post-rationalists are smarter than that. Etc.) By such definitions, I am mostly a post-rationalist… but more importantly, so is Eliezer, and so is CFAR… and the question is: “So who the heck are these stupid ‘rationalists’ everyone talks about; where can I find them? Are they the same people Julia Galef called Straw Vulcans?”
A more charitable reading would be that while wannabe rationalists admit, in theory, the importance of emotions / intuition / System 1, in practice they seem to ignore it. Even if CFAR teaches lessons specifically about this stuff, you wouldn’t know it from reading Less Wrong (other than from the articles specifically saying that CFAR teaches this), because there is something straw-Vulcanish about the LW culture. Therefore a new culture needs to be created, one that publicly embraces emotions / intuition / System 1. And the best way to keep the old culture away is to refuse to use the same label.
When Rationalists write about ritual my impression is that they are writing about ritual for Rationalists as part of the project of establishing or growing a Rationalist community to raise the sanity waterline. Post-rationalists don’t identify as a group to the extent that they want to have “post-rationalist rituals.”
Seems to me like what actually happened could be this:
Eliezer wrote a blog about the “art of rationality”. The blog mostly focuses on explaining some basic concepts of rationality (anticipated experience, belief in belief, mysterious answers, probability and updating...) and on some typical ways humans routinely go astray (affective spirals, tribalism...). Although many people complain that the Sequences are too long, Eliezer considers them merely a specialized introduction necessary (but maybe not sufficient) to avoid the typical mistakes smart humans so often make when they try to be rational. There is still a lot to be done, specifically “instrumental rationality, in general (...) training, teaching, verification, and becoming a proper experimental science based on that (...) developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...”.
This blog attracted a community which turned out to be… less impressive than a few of us expected. A few years later, we still don’t have an army of beisutsukai. (To be blunt, we barely have a functional website.) We don’t have a reliable system to “level up” wannabe rationalists to make them leaders or millionaires or whatever “winning” means for you. A few good things happened (mass media talk about AI risk, effective altruism is a thing), but it’s questionable how much of that should be attributed to LW, and how much to other sources. Most of what we do here is talking, and even the quality of talking seems to be going down recently.
The novice goes astray and says “The art failed me”; the master goes astray and says “I failed my art.”
Seems to me the divide between “rationalists” and “post-rationalists” is related to how they approach this disappointment. Rationalists face the emotionally difficult problem of explaining why they aren’t winning, why they haven’t raised the sanity waterline yet, why they are still mostly isolated individuals… and what exactly they are going to do about it. Also known as: “Where do you see yourself 5 years from now?”, but using the outside view.
Post-rationalists solve this problem by abandoning the label and associating all failures with the old label. I wonder what happens five years from now; will “post-post-rationality” become a thing? I could already start writing some post-post-rationalist slogans: “Rationalists believe that emotions are unimportant. Post-rationalists believe that reason is unimportant. We, the smart post-post-rationalists, recognize that both reason and emotion have their important place in human life, and need to be used properly. Rationalists worship Bayes; post-rationalists worship Kegan. Post-post-rationalists recognize that no science can be built on a single person, and that one needs to consider multiple points of view...”
I basically agree with all of this, with one quibble: I think it is very easy to underestimate the impact that LessWrong has had. There are a lot of people (myself included) who don’t want to be associated with rationality, but whose thinking it has nonetheless shaped. I know many of them in real life. LessWrong is weird enough that there is a social cost to having the first Google result for your real name point to your LessWrong comments. If I am talking in real life to someone I don’t know well, I will not give LessWrong or rationality in general a full-throated defense, and in venues where I do participate under my real name, I only link to rationalsphere articles selectively.
Partially because of this stigma, many people in startupland will read the Sequences, put them in their toolbox, then move on with their lives. They don’t view continued participation as important, and surely much of the low-hanging fruit has long since been plucked. But if you look for the hints, you will find tidbits of information that point to rationality having an impact.
Ezra Klein and Patrick Collison (CEO of Stripe), both notable public figures, had an extensive conversation about rationality.
A member of the Bay Area rationality community was rumored to be a member of the Trump cabinet.
Dominic Cummings (the architect of the Brexit “Leave” campaign) points to concept after concept, some core to rationality and some adjacent to it, so much so that I would be genuinely surprised if he were not aware of it. (Perhaps this isn’t good for rationality, depending on your political views, but don’t let it be said that he isn’t winning.)
OpenAI was launched with $1B in funding from a Silicon Valley who’s who, and they have been in dialogue with MIRI staff (and, interestingly, Stripe’s former CTO is now OpenAI’s CTO, so he obviously knows about the rationalsphere). In general, a lot of interest in AI alignment has developed across multiple groups. Since this was the fundamental purpose of LessWrong to begin with, at least Eliezer is winning beyond what anyone could have expected from his roundabout way of creating mindshare. We can’t say with certainty that this wouldn’t have happened without LessWrong, but personally I find it hard to believe that it didn’t hugely amplify Eliezer’s influence within this field of thought.
Do we have an army of devout rationalists out there winning? No, it doesn’t seem so. But rationalism has had a lot of children that are winning, even if they aren’t looking back to improve rationalism later. Personally, I didn’t expect LessWrong to have as much impact as it has; when I first read the Sequences, I realized how hard it is to put these ideas into action.
Thank you for the optimistic words. However, when I look at historical examples, this still seems like bad news in the long term:
rationalism has had a lot of children that are winning, even if they aren’t looking back to improve rationalism later
Consider Alfred Korzybski, the author of Science and Sanity and founder of General Semantics. He was an “x-rationalist” of his era, 80 years ago. He inspired many successful things; for example, Cognitive-Behavioral Therapy can be traced to his ideas. So we can attribute a lot of “wins” to him and to people inspired by him.
He also completely failed at his main goal, preventing WW2. Also, it doesn’t seem like humanity became more rational, which was his instrumental goal for achieving the former. (On second thought, maybe humanity actually is more rational than back then, and maybe he even contributed to this significantly, but I don’t see it because it became the new normal.)
If today’s rationalist movement follows the same path, the analogous outcome would be a few very successful startup owners, and then… an unfriendly AI kills us all, because everyone was too busy using rationality for their personal goals and didn’t contribute to the basic research and “raising the rationality waterline”.
And in the Everett branch where humanity fails to develop a smarter-than-human AI, 80 years later the rationalist movement will be mostly forgotten; there will be some pathetic remains of CFAR trying to make people read “Rationality: From AI to Zombies”, but no one will really care, simply because the fact that they existed for so long without conquering the world will be evidence against them.
I’d like to do better than this. I think I am progressing in my personal life; a few of those improvements are even measurable, but it is really slow and takes a lot of time. And I believe a long-term solution consists of rationalist groups, not isolated individuals. Making money individually is great, but to change humanity we need some social technology that can replicate rationalist groups. Something like a scout-movement equivalent for LW meetups would be a nice beginning.
Melting Asphalt has this very intriguing analysis of personhood.