I think the LessWrong community is wrong about x-risk and many of the problems around AI, and I’ve got a draft longform with concrete claims that I’m working on...
But I’m sure it’ll be downvoted, because the bet has goalpost-moving baked in and there’s lots of goddamn swearing, so I’m hesitant to post it.
If you think it’s low quality, post it and warn that you think it might be low quality, but maybe in less self-dismissive phrasing than “I’m sure it’ll be downvoted”. I sometimes post “I understand if this gets downvoted; I’m not sure how high quality it is” types of comments. I don’t think those are weird or bad; just try to be honest in both directions, and don’t diss yourself unnecessarily.
And anyway, this community is a lot more diverse than you think. It’s the rationalist AI doomers who are rationalist AI doomers, not the entire LessWrong alignment community. Those who are paying attention to the research and making headway on the problem, e.g. Wentworth, seem considerably more optimistic. The alarmists have done a good job being alarmists, but there’s only so much alarmism you can do before you need to come back down to uncertainty and try to figure out what’s actually true. I’m not impressed with MIRI lately at all.
A word of advice: don’t post any version of it that says “I’m sure this will be downvoted”. Saying that sort of thing is a reliable enough signal of low quality that even if your post is actually good, it will get a worse reception than it deserves.
Thanks. FYI, I tried making the post I alluded to:
https://www.lesswrong.com/posts/F7xySqiEDhJBnRyKL/i-think-we-re-approaching-the-bitter-lesson-s-asymptote
“the bet”—what bet?
For sure. The actual post I make will not demonstrate my personal insecurities.
I will propose a broad test/bet that will shed light on my claims, or at least give some places to examine.