I really want to create a more distinct and intentionally separate culture, both on LessWrong and at the Rose Garden Inn, and I think owning a physical space helps hugely with that. FTX, various experiences I’ve had in the EA space over the past few years, and a lot of safetywashing in AI Alignment more recently have made me much more hesitant to build a community that can easily get swept up in respectability cascades and exploited by bad actors, and I really want to develop a more intentional culture in what we are building here. Hopefully this will enable the people I am supporting to work on things like AI Alignment without making the world overall worse, displaying low-integrity behavior, or getting taken advantage of.
I’m extremely excited by and supportive of this comment! An especially important related area, I think, is “solving the deference problem”, or the cascades of a sinking bar in forecasting and threat modeling that I’ve felt over the last couple of years.