I guess I feel like we’re at an event for the physics institute and someone’s being nerdy/awkward in the corner, and the question is whether we should let that person be or publicly tell them off / kick them out. I feel like the best people there are a bit nerdy and overly analytical, and that’s fine; deciding to publicly tell them off is over the top and will just make all the physicists more uptight and self-conscious.
To pick a very concrete problem we’ve worked on: the AI alignment problem is taken seriously by very important people who are also aware that LW is weird, yet Eliezer goes on the Sam Harris podcast, Bostrom is invited to advise the UK government, and Karnofsky’s got a billion dollars focused in large part on the AI problem. We’re not being defined by this odd stuff, and I don’t think we need to feel like we are. I expect that as we find similar concrete problems or proposals, we’ll continue to be taken very seriously and have major successes.
As I see it, we’ve had this success partly because many of us have been scrupulous about not being needlessly offensive. (Bostrom is a good example here.) The rationalist brand is already weak (e.g. search Twitter for relevant terms), and if LessWrong had actually tried to have forthright discussions of every interesting topic, that might well have been fatal.