It's nice to hear about the high standards you continue to pursue. I agree that LessWrong should hold itself to much higher standards than other communities, even other rationality-centred or -adjacent ones.
My model of this big effort to raise the sanity waterline and prevent existential catastrophes contains three concentric spheres. The outer sphere is all of humanity: ever-changing, yet largely passive. Its public opinion influences most of the decisions of world leaders and companies, but that opinion can in turn be swayed by other, more directed forces.
The middle sphere contains communities focused on spreading important ideas by fostering rationalist discourse (for example, ACX, Asterisk Magazine, or Vox’s Future Perfect). In other words, it aims at that capacity to sway public opinion, to bring key ideas into popular discussion.
And the inner sphere is LessWrong, which shares the aims of the middle sphere and is, in addition, the main source of new ideas and patterns of thought. Some of these ideas (hopefully a concern for AI alignment, awareness of the control problem, or Bayesianism, for instance) will eventually trickle down to the general public; others, such as technical topics in AI safety, don’t need to reach that level, because they belong to the higher end of the spectrum that is directly working to solve these problems.
So I very much agree with the vision of maintaining LW as a sort of university, with high entry barriers, in order to produce refined, high-quality ideas and debates, while keeping in mind that for some of these ideas to make a difference, they need to trickle down and reach the public debate.