I suspect it’s a half-solution that will decay back to mostly-people-just-use-the-first-vote
Regardless of whether it’s a bad solution in other respects, I predict that people will use the agree/disagree vote a ton, reliably, forever.
I don’t think it lets me grok the quality of the reaction to a comment at a glance; I keep having to effortfully process “okay, what does—okay, this means that people like it but think it’s slightly false, unless they—hmm, a lot more people voted up-down than true-false, unless they all strong voted up-down but weak-voted tru—you know what, I can’t get any meaningful info out of this.”
I mostly care about agree/disagree votes (especially when it comes to specifics). From my perspective, the upvotes/downvotes are less important info; they’re mostly there to reward good behavior and make it easier to find the best content fast.
In that respect, the thing that annoys me about agree/disagree votes isn’t any particular relationship to the upvotes/downvotes; it’s that there isn’t a consistent way to distinguish ‘a few people agreeing strongly’ from ‘a larger number of people agreeing weakly’, ‘everyone agrees with this but weakly’ from ‘some agree strongly but they’re being partly offset by others who disagree’, or ‘this is the author agreeing with their own comment’ from ‘this is a peer independently vouching for the comment’s accuracy’.
I think all of those things would ideally be distinguishable, at least on hover. (Or the ambiguity would be eliminated by changing how the feature works—e.g., get rid of the strong/weak distinction for agreevotes, get rid of the thing where users can agreevote their own comments, etc.)
The specific thing I’d suggest is to get rid of ‘authors can agree/disagree vote on their own comments’ (LW already has a ‘disendorse’ feature), and to replace the current UI with a tiny bar graph showing the rough relative number of strong agree, weak agree, strong disagree, and weak disagree votes (at least on hover).
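For concreteness, here is a minimal sketch of the kind of per-comment breakdown that hover view could show. Everything in it (the AgreementBreakdown shape, the renderBreakdown helper, the bucket labels) is hypothetical illustration, not LessWrong’s actual schema or UI code:

```typescript
// Hypothetical per-comment agreement data the hover UI would need
// (illustrative only; not LessWrong's actual schema or API).
interface AgreementBreakdown {
  strongAgree: number;
  weakAgree: number;
  weakDisagree: number;
  strongDisagree: number;
}

// Render the four buckets as a tiny text "bar graph" for a tooltip,
// scaled so the largest bucket is `width` characters wide.
function renderBreakdown(b: AgreementBreakdown, width = 10): string {
  const rows: Array<[string, number]> = [
    ["strong agree", b.strongAgree],
    ["weak agree", b.weakAgree],
    ["weak disagree", b.weakDisagree],
    ["strong disagree", b.strongDisagree],
  ];
  const max = Math.max(1, ...rows.map(([, n]) => n));
  return rows
    .map(([label, n]) => `${label.padEnd(16)} ${"#".repeat(Math.round((n / max) * width))} ${n}`)
    .join("\n");
}

// Example: five weak agrees plus two strong disagrees can net out to a small
// agreement score, but the breakdown makes the actual split visible.
console.log(renderBreakdown({ strongAgree: 0, weakAgree: 5, weakDisagree: 0, strongDisagree: 2 }));
```

The point isn’t the particular rendering; it’s that the four counts are enough to resolve the ‘few strong votes vs. many weak votes’ ambiguity at a glance.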
I predict that people will use the agree/disagree vote a ton, reliably, forever.
I feel zero motivation to use it. I feel zero value gained from it, in its current form. I actually find it a deterrent, e.g. looking at the information coming in on my comment above gave me a noticeable “ok just never comment on LW again” feeling.
(I now fear social punishment for admitting this fact, like people will decide that me having detected such an impulse means I’m some kind of petty or lame or bad or whatever, but eh, it’s true and relevant. I don’t find downvotes motivationally deterring in the same fashion, at all.)
EDIT: this has been true in other instances of looking at these numbers on my other comments in the past; not an isolated incident.
More detail on the underlying emotion:
“Okay, so it’s … it’s plus eight, on some karma meaning … something, but negative nine on agreement? What the heck does this even mean, do people think it’s good but wrong, are some people upvoting but others downvoting in a different place—I hate this. I hate everything about this. Just give up and go somewhere where the information is clear and parse-able.”
Like, maybe it would feel better if I could see something that at least confirmed to me how many people voted in both places? So I’m not left with absolutely no idea how to compare the +8 to the −9?
But overall it just hurts/confuses and I’m having to actively fight my own you’d-be-happier-not-being-here feelings, which are very strong in a way that they aren’t in the one-vote system, and wouldn’t be in either my compass rose system or Rob’s heart/X system.
do people think it’s good but wrong [...] I hate this
The parent comment serves as a counterexample to this interpretation: it seems natural to agreement-downvote your comment to indicate that I don’t share this feeling/salient-impression, without meaning to communicate that I believe your feeling-report to be false (about your own impression). And it seems natural to karma-upvote it to indicate that I want this feeling’s existence to become a known issue, and to incentivise corroboration from others who feel similarly (the karma-upvote gives the comment visibility; the corroboration might in part be communicated with agreement-upvoting).
I think you’re confusing “this should make sense to you, Duncan” with “therefore this makes sense to you, Duncan”
(or more broadly, “this should make sense to people” with “therefore, it will/will be good.”)
I agree that there is some effortful, System-2 processing that I could do, to draw out the meaning that you have spelled out above.
there is some effortful, System-2 processing that I could do
The important distinction is about whether a System-1 distillation that enables ease exists (it develops with a bit of exposure), and about the character of that distillation. (Is it ugly/ruinous/failing to form, despite the training data being fine?) Whether a new thing is immediately familiar is much less strategically relevant.
This function has been available, and I’ve encountered it off and on, for months. This isn’t a case of “c’mon, give it a few tries before you judge it.” I’ve had more than a bit of exposure.
If being highly upvoted yet highly disagreed with makes you feel deterred and never want to comment again, wouldn’t that also be the case if you saw a lot of light orange beside your comments?
It seems unlikely that you’ll forget your own proposal or what the colours correspond to.
In fact it may hasten your departure since bright colours are a lot more difficult to ignore than a grey number.
I do not have a model/explanation for why, but no, apparently not. I’ve got pretty decent introspection and very good predicting-future-Duncan’s-responses skill and the light orange does not produce the same demoralization as negative numbers.
Though the negative numbers also produce less demoralization if, per some suggestions, the prompt is changed to something like “I could truthfully say this or something close to it from my own beliefs and experience.”
From my perspective, the upvotes/downvotes are less important info
Their role is different: it’s about quality/incentives, and so it’s the appropriate basis for deciding visibility (comment ordering) and for aggregating into a user’s overall footprint/contribution. Agreement clarifies attitude toward individual comments without compromising the quality vote, in particular making it straightforward/convenient to express approval of (and incentivize) disagreed-with comments. In this way the agreement vote improves the fidelity of the more strategic quality/incentives vote, while communicating an additional tactical fact about each particular comment.