I'm sad this is your experience!

I interpret "agree/disagree" in this context as literally "is this comment true, as far as you can tell, or is it false?", so when I imagine changing it to "true/false" I don't imagine it feeling any different to me. (Which also means I'm not personally opposed to such a change. 🤷)
Maybe relevant that I'm used to Arbital's "assign a probability to this claim" feature. I just think of this as a more coarse-grained, fast version of Arbital's tool for assigning probabilities to claims.
When I see disagree-votes on my comments, I think I typically feel bad about it if it's also downvoted (often some flavor of "nooo you're not fully understanding a thing I was trying to communicate!"), but happy about it if it's upvoted. Something like:

Amusement at the upvote/agreevote disparity, and warm feelings toward LW that it was able to mentally separate its approval for the comment from how much probability it assigns to the comment being true.

Pride in LW for being one of the rare places on the Internet that cares about the distinction between "I like this" and "I think this is true".

I mostly don't perceive the disagreevotes as "you are flatly telling me to my face that I'm wrong". Rather, I perceive it more like "these people are writing journal entries to themselves saying 'Dear Diary, my current belief is X'", and then LW kindly records a bunch of these diary entries in a single centralized location so we can get cool polling data about where people are at. It feels to me like a side-note.
Possibly I was primed to interpret things this way by Arbital? On Arbital, probability assignments get their own page you can click through to; and Arbital pages are timeless, so people often go visit a page and vote on it years after the post was originally created, with (I think?) no expectation that the post author will ever see that they voted. And their names and specific probabilities are attached. All of which creates a sense of "this isn't a response to the post, it's just a tool for people to keep track of what they think of things".
Maybe that's the crux? I might be totally wrong, but I imagine you seeing a disagree-vote and reading it instead as "a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous", or as an attempt to manipulate social reality and say "we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site".

I think I can also better understand your perspective (though again, correct me if I'm wrong) if I imagine I'm in hostile territory surrounded by enemies.
Like, maybe you imagine five people stalking you around LW, downvoting-and-disagreevoting everything you post, unfairly strawmanning you, etc.; and then there's a separate population of LWers who are more fair-minded and slower to rush to judgment.
But if the latter group of people tends to upvote you and humbly abstain from (dis)agreevoting, then the pattern we'll often see is "you're being upvoted and disagreed with", as though the latter fair-minded population were doing both the upvoting and the disagreevoting. (Or as though the site as a whole were virtuously doing the support-and-defend-people's-right-to-say-unpopular-things thing.) Which is in fact wildly different from a world where the fair-minded people are neutral or positive about the truth-value of your comments, while the Duncan-hounding trolls supply all the disagree-votes.
And even if the probability of the "Duncan-hounding trolls" thing is low, it's maddening to have so much uncertainty about which of those scenarios (or other scenarios) is occurring. And it's doubly maddening to have to worry that third parties might assign unduly low probability to the "Duncan-hounding trolls" thing, and to related scenarios. And that they might prematurely discount Duncan's view, or be inclined to strawman it, after seeing a -8 or whatever that tells them "social reality is that this comment is Wrong".

Again, tell me if this is all totally off-base. This is me story-telling so you can correct my models; I don't have a crystal ball. But that's an example of a scenario where I'd feel way more anxious about the new system, and where I'd feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
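(A toy sketch of the kind of breakdown I mean, to make it concrete. The per-voter records here are invented for illustration; LW doesn't actually expose votes at this granularity:)

```python
from collections import Counter

# Hypothetical per-voter records for one comment:
# (karma vote, agreement vote) pairs. Invented data, for illustration only.
votes = [
    ("up", "disagree"),
    ("up", "disagree"),
    ("up", "agree"),
    ("down", "disagree"),
]

# Tally the four karma-x-agreement combinations separately,
# instead of collapsing them into two independent totals.
breakdown = Counter(votes)
for (karma, agreement), n in sorted(breakdown.items()):
    print(f"{karma}voted & {agreement}d: {n}")
```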
Plausibly a big part of why we feel differently about the system is that you've had lots of negative experiences on LW and don't trust the consensus here, while I feel more OK about it?
Like, I don't think LW is reliably correct, and I don't think of "people who use LW" as the great-at-epistemics core of the rationalescent community. But I feel fine about the site, and able to advocate my views, be heard, persuade people, etc. If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that's constantly under siege, etc., then I could imagine having a totally different experience.
It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction.
I imagine you seeing a disagree-vote and reading it instead as "a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous", or as an attempt to manipulate social reality and say "we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site".

… resonated pretty strongly.

But that's an example of a scenario where I'd feel way more anxious about the new system, and where I'd feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Yes.
Plausibly a big part of why we feel differently about the system is that you've had lots of negative experiences on LW and don't trust the consensus here, while I feel more OK about it?

If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that's constantly under siege, etc., then I could imagine having a totally different experience.
Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three.
Worth noting that I was on board with the def of approve/disapprove being "I could truthfully say this or something close to it from my own beliefs and experience."
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this is that… "what people take to be the meaning of a vote (of any kind and in any direction)" is not something that you (as the hypothetical system's designer) can control, or determine, or hold stable, or predict, etc.
Indeed it's not only possible, but likely, that:
different people will interpret votes differently;
people who cast the votes will interpret them differently from people who use the votes as readers;
there will be difficult-to-predict patterns in which people interpret votes in which ways;
how people interpret votes, and what patterns there are in this, will drift over time;
how people think about the meaning of the votes (when explicitly thinking about them) will differ from how people's usage of the votes (from either end) maps to their cognitive and affective states (i.e., people think they think about votes one way, but they actually think about votes another way);
… etc., etc.
So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism ("boo"/"yay"), simply because it's not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users' feedback gets to modulate their experience).
I don't think this follows… after all, once you're whitelisting a relatively small set of users you want to hear from, why not just get those users' comments, and skip the voting?
(And if you're talking about a large set of "preferred respondents", then… I'm not sure how this could be managed, in a practical sense?)
That's why it's a hard problem. The idea would be to get leverage by letting you say "I trust this user's judgement, including about whose judgement to trust". Then you use something like (personalized) PageRank / eigenmorality (https://scottaaronson.blog/?p=1820) to get useful information despite the circularity of "trusting who to trust about who to trust about…", while leveraging all the users' ratings of trust.
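A minimal sketch of what that might look like, assuming a toy trust matrix (the graph, names, and numbers here are invented for illustration; this isn't a worked-out proposal):

```python
import numpy as np

def personalized_pagerank(trust, seed, damping=0.85, iters=100):
    """Propagate trust transitively through "who trusts whom" links.

    trust[i, j] > 0 means user i trusts user j's judgement.
    seed is the user whose perspective we personalize to.
    Returns a trust score for every user, from seed's point of view.
    """
    n = trust.shape[0]
    # Row-normalize so each user's outgoing trust sums to 1.
    row_sums = trust.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid dividing by zero for users who trust no one
    T = trust / row_sums
    # Restart vector: teleportation mass always returns to the seed user,
    # which is what makes the ranking personalized rather than global.
    restart = np.zeros(n)
    restart[seed] = 1.0
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) * restart + damping * (scores @ T)
    return scores

# Toy graph: user 0 trusts user 1, and user 1 trusts user 2, so user 2
# inherits some of user 0's trust transitively despite no direct link.
trust = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],
])
print(personalized_pagerank(trust, seed=0))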
I agree, but I find something valuable about, like, unambiguous labels anyway?
Like, it's easier for me to metabolize "fine, these people are using the button 'wrong' according to the explicit request made by the site" somehow, than it is to metabolize the confusingly ambiguous, open-ended "agree/disagree" which, from comments all throughout this post, clearly means like six different clusters of Thing.
Did you mean "confusingly ambiguous"? If not, then could you explain that bit?
I did mean confusingly ambiguous, which is an ironic typo. Thanks.
I think we should be in the business of not setting up brand-new motte-and-baileys, and enshrining them in site architecture.
Yes, I certainly agree with this.
(I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing things are also motte-and-baileys… or something like them, anyway. Is there even an "official" meaning of the karma vote buttons? Probably there is, but it's not well-known enough to even be a "motte", it seems to me… well, anyhow, as I said: maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)