I basically agree with all of this, with one quibble: I think it is very easy to underestimate the impact that LessWrong has had. There are a lot of people (myself included) who don’t want to be associated with rationality, but whose thinking it has nonetheless shaped. I know many of them in real life. LessWrong is weird enough that there is a social cost to having the first Google result for your real name point to your LessWrong comments. If I am talking in person to someone I don’t know well about LessWrong, or rationality in general, I will not give it a full-throated defense, and in venues where I do participate under my real name, I only link to rationalsphere articles selectively.
Partially because of this stigma, many people in startupland will read the sequences, put them in their toolbox, and then move on with their lives. They don’t view continued participation as important, and surely much of the low-hanging fruit has long since been plucked. But if you look for the hints, you will find tidbits of information that point to rationality having an impact.
Ezra Klein and Patrick Collison (CEO of Stripe) had an extensive conversation about rationality, and both are famous, notable figures.
A member of the Bay Area rationality community was rumored to be a member of the Trump cabinet.
Dominic Cummings (the architect of the Brexit “Leave” campaign) points to concept after concept that is both core to and adjacent to rationality, so much so that I would be genuinely surprised if he were not aware of it. (Perhaps this isn’t good for rationality depending on your political views, but don’t let it be said that he isn’t winning).
OpenAI was launched with $1B in funding from a Silicon Valley who’s who, and its people have been in dialogue with MIRI staff (and, interestingly, Stripe’s former CTO is now OpenAI’s CTO, so he obviously knows about the rationalsphere). More generally, a great deal of interest in AI alignment has developed across multiple groups. Since this was the fundamental purpose of LessWrong to begin with, Eliezer at least is winning beyond what anyone could have expected from his roundabout way of creating mindshare. We can’t say with certainty that this wouldn’t have happened without LessWrong, but personally I find it hard to believe that it didn’t have a huge impact on Eliezer’s influence within this field of thought.
Do we have an army of devout rationalists out there winning? No, it doesn’t seem so. But rationalism has had a lot of children that are winning, even if they aren’t looking back to improve rationalism later. Personally, I didn’t expect LessWrong to have had as much impact as it has; when I first read the sequences, I realized how hard it is to put these ideas into action.
Thank you for the optimistic words. However, when I look at historical examples, this still seems like bad news in the long term:
“rationalism has had a lot of children that are winning, even if they aren’t looking back to improve rationalism later”
Consider Alfred Korzybski, the author of Science and Sanity and founder of General Semantics. He was an “x-rationalist” of his era, 80 years ago. He inspired many successful things; for example, Cognitive-Behavioral Therapy can be traced to his ideas. So we can attribute a lot of “wins” to him and to people inspired by him.
He also completely failed at his main goal, preventing WW2. Also, it doesn’t seem like humanity became more rational, which was his instrumental goal for achieving the former. (On second thought, maybe humanity actually is more rational than it was back then, and maybe he even contributed to this significantly, but I don’t see it because it has become the new normal.)
If today’s rationalist movement follows the same path, the analogous outcome would be a few very successful startup owners, and then… an unfriendly AI kills us all, because everyone was too busy using rationality for their personal goals, and nobody contributed to the basic research and to “raising the rationality waterline”.
And in the Everett branch where humanity fails to develop a smarter-than-human AI, 80 years later the rationalist movement will be mostly forgotten; there will be some pathetic remains of CFAR trying to make people read “Rationality: From AI to Zombies”, but no one will really care, simply because the fact that the movement existed for so long without conquering the world will be evidence against it.
I’d like to do better than this. I think I am making progress in my personal life; a few of those improvements are even measurable, but it is really slow and takes a lot of time. And I believe a long-term solution consists of rationalist groups, not isolated individuals. Making money individually is great, but to change humanity we need some social technology that can replicate rationalist groups. Something like a scout-movement equivalent for LW meetups would be a nice beginning.