A little while ago I vented about this topic on my shortform: https://www.lesswrong.com/posts/pjCnAXMkXjbmLw3ii/nim-s-shortform?commentId=EczMSzhPMpRAEBhhj.
Since writing that, I still feel a widening gap between my views and those of the LW zeitgeist. I’m not convinced that AI will inevitably kill everybody if it gets smart enough, as the smartest people around here seem to believe.
Back when AI was a “someday” thing, I feel like people here discussed its risks with an understanding of perspectives like mine, and my views were gradually converging toward those of the site as I read more. It felt like people who disagreed about x-risk were regarded as potential allies worth listening to, in a way I don’t experience in more recent content.
Since AI has become a “right now” thing, I feel like there’s an attitude that if you aren’t already sold on AI destroying everything, you’re not worth discussing it with. This may be objectively correct: if someone with the power to help stop AI from destroying us, and only finite effort to exert, spends that effort considering ignorant/uninformed/unenlightened perspectives such as my own, diverting it from more important work may be directly detrimental to the survival of the species.
In short, I get how people smarter than I am are assigning high probability to us being in a timeline where LW needs to stop being the broader forum I joined it for. I figure they’re probably doing the right thing, and I’m probably in the wrong place for what LW needs to become. Complaining about losing what LW was to make way for what it needs to be feels like complaining about factories switching from luxury items to essential supplies during a crisis.
And it feels like if this whole experience were a fable, the moral would be about alignment and human cooperation in some way ;)