Upvotes more informative than downvotes
If you upvote me, then I learn that you like or agree with the specific ideas I’ve articulated in my writing. If I write “blue is the best color,” and you agreevote, then I learn you also agree that the best color is blue.
But if you disagree, I only learn that you think blue is not the best color. Maybe you think red, orange, green or black is the best color. Maybe you don’t think there is a best color. Maybe you think blue is only the second-best color, or maybe you think it’s the worst color.
I usually don’t upvote or downvote mainly based on agreement, so there may be even less information about agreement than you might think!
I have upvoted quite a few posts where I disagree with the main conclusion or other statements within them, when those posts are generally informative, entertaining, or otherwise worth reading. I have downvoted a lot of posts whose conclusions I generally agreed with but that were poorly written, repetitive, trivial, boring, or overbearing, used flawed arguments, or had other qualities I don’t like to see in posts on this site.
A post that said nothing but “blue is the best colour” would definitely get a downvote from me for being both trivial and lacking any support for the position, even if I personally agree. At the very least I would want to know by what criteria it was considered “best”, along with some supporting evidence that those criteria are generally relevant and that blue actually meets them better than anything else.
Interesting—I never downvote based on being poorly written, repetitive, trivial, or boring. I do downvote for a hostile-seeming tone accompanied by a wrong or poorly-thought-through argument. I’ll disagreevote if I confidently disagree.
“Blue is the best color” was meant as a trivial example of a statement where there’s a lot of alternative “things that could be true” if the statement were false, not as an example of a good comment.
This doesn’t seem quite right. The information content of agree vs. disagree depends on your prior, i.e., on P(people agree). If that’s <0.5, then an agree vote is more informative; if it’s >0.5, then a disagree vote is more informative. But it’s not obvious that it’s <0.5 in general.
Fair point! The scenario I’m imagining is one in which our prior is low because we’re dealing with a specific, complex statement like “BLUE is the BEST color.” There are a lot of ways that could be considered wrong, but only one way for it to be considered right, so by default we’d have a low prior and therefore learn a lot more from an agreevote than a disagreevote.
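To put rough numbers on this: if a single vote is treated as a binary observation with prior p = P(agree), its surprisal is −log2(p) bits. Here is a minimal sketch of that comparison; the prior of 0.1 below is purely illustrative, not a measured value.

```python
import math

def surprisal_bits(p: float) -> float:
    """Information content, in bits, of observing an event that had probability p."""
    return -math.log2(p)

# Illustrative prior that a given voter agrees with "blue is the best color".
# Low, because there are many ways to disagree and only one way to agree.
p_agree = 0.1

print(f"agree vote:    {surprisal_bits(p_agree):.2f} bits")      # ~3.32 bits
print(f"disagree vote: {surprisal_bits(1 - p_agree):.2f} bits")  # ~0.15 bits
# The two outcomes are equally informative only when p_agree = 0.5.
```

With any prior below 0.5 the agree vote carries more bits, which is the asymmetry being pointed at here.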
I think this is why it makes sense for a truth seeker to be happier with upvotes than downvotes, pleasure aside. If I get agreevotes, I am getting a lot of information in situations like these. If I get disagreevotes, especially when nobody’s taking the time to express why, then I’m learning very little while perceiving a hint that there is some gap in my knowledge.
Come to think of it, I feel like I tend to downvote most when I perceive that the statement has a lot of support (even if I’m the first voter). If somebody makes a statement that I think will be widely received as wrong, I will typically either ignore it or respond to it explicitly. Intuitively, that behavior seems appropriate: I use downvotes where they convey more information and use comments where downvotes would convey less.
It depends. My last post got 20 downvotes, but only one comment that didn’t really challenge me. That tells me people disagree with my heinous ramblings, but can’t prove me wrong.
It’s more that we don’t think it’s time yet, I think. Of course humanity can’t stay in charge forever.
Tbh, I’d even prefer it to happen sooner rather than later. The term singularity truly seems fitting, as I see a lot of timelines culminating right now. We’re still struggling with a pandemic and its economic and social consequences; the Cold War has erupted again, but this time with inverted signs as the West undergoes a Marxist cultural revolution; there’s the looming threat of WWIII, the looming threat of a civil war in the US, and other nations doing their things as well (insert Donald Trump saying “China” here); and AGI is arriving within the next five years (my estimate, with >90% confidence). What a time to be alive.
I don’t think it’s soldier mindset. Posts critical of leading lights get lots of upvotes when they’re well-executed.
One possibility is that there’s a greater concentration of expertise in that specific topic on this website. It’s fun for AI safety people to blow off steam talking about all sorts of other subjects, and they can sort of let their hair down, but when AI safety comes up, it becomes important to have a more buttoned-up conversation that’s mindful of relative status in the field and is on the leading edge of what’s interesting to participants.
Another possibility is that LessWrong is swamped with AI safety writing, and so people don’t want any more of it unless it’s really good. They’re craving variety.
I think this is a big part of it.