It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction.
I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
… resonated pretty strongly.
But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Yes.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three.
Worth noting that I was on board with the definition of approve/disapprove being "I could truthfully say this or something close to it from my own beliefs and experience."
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this, is that… “what people take to be the meaning of a vote (of any kind and in any direction)” is not something that you (as the hypothetical system’s designer) can control, or determine, or hold stable, or predict, etc.
Indeed it’s not only possible, but likely, that:
different people will interpret votes differently;
people who cast the votes will interpret them differently from people who use the votes as readers;
there will be difficult-to-predict patterns in which people interpret votes in which ways;
how people interpret votes, and what patterns there are in this, will drift over time;
how people think about the meaning of the votes (when explicitly thinking about them) will differ from how their usage of the votes (from either end) actually maps to their cognitive and affective states (i.e., people think they think about votes one way, but actually think about them another way);
… etc., etc.
So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism (“boo”/“yay”), simply because it’s not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users' feedback gets to modulate their experience).
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting?
(And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
That’s why it’s a hard problem. The idea would be to get leverage by letting you say “I trust this user’s judgement, including about whose judgement to trust”. Then you use something like (personalized) PageRank / eigenmorality (https://scottaaronson.blog/?p=1820) to get useful information despite the circularity of “trusting who to trust about who to trust about …”, in a way that leverages all users’ trust ratings.
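For concreteness, here is a minimal sketch of that idea, assuming a plain 0/1 trust matrix and standard personalized PageRank computed by power iteration; the function name, parameters, and toy data below are illustrative assumptions of mine, not anything specified in the thread or in the linked post.

```python
# Minimal sketch: resolving "trusting who to trust" via personalized PageRank.
# All names (personalized_trust_scores, damping, T, me) are illustrative assumptions.

import numpy as np

def personalized_trust_scores(trust, me, damping=0.85, iters=100, tol=1e-10):
    """Trust scores for every user, seeded from user `me`.

    trust[i][j] = 1 if user i has marked user j as trusted, else 0.
    The result is the stationary distribution of a random walk that follows
    trust edges with probability `damping` and otherwise teleports back to
    `me` -- i.e., personalized PageRank.
    """
    n = trust.shape[0]
    # Row-normalize so each user's outgoing trust sums to 1; users who trust
    # no one contribute their mass via the teleport step instead.
    out = trust.sum(axis=1, keepdims=True)
    transition = np.divide(trust, out, out=np.zeros_like(trust, dtype=float),
                           where=out > 0)

    # Personalization vector: all restart mass goes to `me`.
    restart = np.zeros(n)
    restart[me] = 1.0

    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        dangling = scores[(out == 0).ravel()].sum()  # mass from users with no trust edges
        new = damping * (scores @ transition + dangling * restart) + (1 - damping) * restart
        if np.abs(new - scores).sum() < tol:
            break
        scores = new
    return scores

# Toy example: user 0 trusts 1 and 2; user 1 trusts 2; user 2 trusts 1.
T = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 1, 0]], dtype=float)
print(personalized_trust_scores(T, me=0))
```

The restart-at-`me` step is what makes the ranking a per-user view rather than a single global consensus score, which is closer to "choose whose feedback modulates your experience" than a flat whitelist would be.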
I agree, but I find something valuable about, like, unambiguous labels anyway?
Like it’s easier for me to metabolize “fine, these people are using the button ‘wrong’ according to the explicit request made by the site” somehow, than it is to metabolize the confusingly ambiguous open-ended “agree/disagree” which, from comments all throughout this post, clearly means like six different clusters of Thing.
Did you mean “confusingly ambiguous”? If not, then could you explain that bit?
I did mean confusingly ambiguous, which is an ironic typo. Thanks.
I think we should be in the business of not setting up brand-new motte-and-baileys, and enshrining them in site architecture.
Yes, I certainly agree with this.
(I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing karma vote buttons are also motte-and-baileys… or something like them, anyway—is there even an “official” meaning of the karma vote buttons? Probably there is, but it’s not well-known enough to even be a “motte”, it seems to me… well, anyhow, as I said—maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)