I’m 100% with you. I don’t like the current trend of LW becoming a blog about AI, and even less a blog about how AGI doom is inevitable (in my opinion there have been too many posts about that, with some exceptions of course). I have found myself lately downvoting AI-related posts more easily and upvoting non-AI content more easily too.
I weakly downvoted your comment:
I think the solution to “too much AI content” is not to downvote the AI content indiscriminately. If many posts with correct proofs in harmonic analysis were being posted to LessWrong, I would not want to downvote them; after all, they are not wrong in any important sense, and may even be important for the world!
But I would like to filter them out, at least until I’ve learned the basics of harmonic analysis and can understand them better (if I ever desired to do so).
For what it’s worth, I think I am actually in favor of downvoting content of which you think there is too much. The general rule for voting is “upvote this if you want to see more like this” and “downvote this if you want to see less like this”. I think it’s too easy to end up in a world where the site is filled with content that nobody likes, but everyone thinks someone else might like. I think it’s better for people to just vote based on their preferences, and we will get it right in the aggregate.
Generally, I would want people to vote on articles they have actually read.
If posts that nobody wants to read because they seem very technical get zero votes, I think that’s a good outcome. They don’t need to be downvoted.
Sorry, I think I wasn’t clear enough. I meant that my threshold for downvoting an AI-related post was somewhat lower, not that I was downvoting them indiscriminately.
I still think that’s bad, but I was also wrong to downvote you (your comment was true and informative!). So I removed the downvote.