It depends. My last post got 20 downvotes, but only one comment that didn’t really challenge me. That tells me people disagree with my heinous ramblings, but can’t prove me wrong.
It’s more that we don’t think it’s time yet, I think. Of course humanity can’t stay in charge forever.
Tbh, I’d even prefer it to happen sooner rather than later. The term singularity truly seems fitting, as I see a lot of timelines culminating right now: we’re still struggling with a pandemic and its economic and social consequences, the Cold War has erupted again but this time with inverted signs as the West undergoes a Marxist cultural revolution, there’s the looming threat of WWIII, the looming threat of a civil war in the US, other nations doing their things as well (insert Donald Trump saying “China” here), and AGI arriving within the next five years (my estimate, with >90% confidence). What a time to be alive.
I don’t think it’s soldier mindset. Posts critical of leading lights get lots of upvotes when they’re well-executed.
One possibility is that there’s a greater concentration of expertise in that specific topic on this website. It’s fun for AI safety people to blow off steam talking about all sorts of other subjects, and they can sort of let their hair down, but when AI safety comes up, it becomes important to have a more buttoned-up conversation that’s mindful of relative status in the field and is on the leading edge of what’s interesting to participants.
Another possibility is that LessWrong is swamped with AI safety writing, and so people don’t want any more of it unless it’s really good. They’re craving variety.
I think this is a big part of it.