I have learned to expect to receive mostly downvotes when I write about AI.
I can easily imagine general reasons why people might downvote me. They might disagree with my posts, dislike them, or fear their negative consequences. They might be bored of the topic and want only high-caliber expert writing. It might be that the concentrated AI expertise on LessWrong collectively lets its hair down on other topics, but demands professionalism on the specific topic of its expertise.
Because I don’t know who’s downvoting my AI posts, or their specific reasons why, I don’t actually learn anything from being downvoted. I experience it as an expected form of unpleasantness, like a dog wearing a shock collar who just ran through the “invisible AI fence.” This is not true when I write on other topics, where I typically expect to receive 15-30 karma on average and gain new information when I receive something outside that range.
I do not want to let karma affect my decision to write, or what I write about. I’m going to continue writing my thoughts, and have a sense of humor about the low karma scores that will result.
Since I am not any kind of expert on the subject, I will keep these writings on my personal shortform.
I don’t post much, but I comment frequently, and somewhat target 80% positive reactions. If I’m not getting downvoted, I’m probably not saying anything very interesting. Looking at your post history, I don’t see anything with negative totals, though some have low-ish scores. I also note that you have some voters a little trigger-happy with strong votes (18 karma in 5 votes), which is going to skew the perception.
I recommend you not try to learn very much from votes. It’s a lightweight indicator of popularity, not much more. If something is overwhelmingly negative, that’s a signal you’ve crossed some line, but mixed and slightly-positive probably means you’re outside the echo chamber.
Instead of karma, focus on comments and discussion/feedback value. If someone is interested enough to interact, that’s worth dozens of upvotes. AND you get to refine your topical beliefs based on the actual discussion, rather than (well, in addition to) updating your less-valuable beliefs about what LW wants to read.
A) Your observation that negative feedback is undirected is correct; it's a well-known phenomenon.
B) Can you give some examples?