EDIT: it’s now −3. Is the takeaway “this comment is substantially more false than true”?
I think other people are saying “the sentences that Duncan says about himself are not true for me” while also saying “I am nevertheless glad that Duncan said it”. This seems like great information for me, and is like, quite important for me getting information from this thread about how people want us to change the feature.
And if you change agreement-dimension to truth-dimension, this data will no longer be possible to express in terms of voting, because it’s not the case that Duncan-opinion is false.
The distinction between “not true for me, the reader” and “not true at all” is not clear.
And that is the distinction between “agree/disagree” and “true/false.”
Hmm, I do sure find the first one more helpful when people talk about themselves. Like, if someone says “I think X”, I want to know when other people would say “I think not X”. I don’t want people to tell me whether they think the OP accurately reported on their own beliefs and really believes X.
Yeah. Both are useful, and each is more useful in some context or other. I just want it to be relatively unambiguous which is happening—I really felt like I was being told I was wrong in my top-level comment. That was the emotional valence.
I’m sad this is your experience!
I interpret “agree/disagree” in this context as literally ‘is this comment true, as far as you can tell, or is it false?’, so when I imagine changing it to “true/false” I don’t imagine it feeling any different to me. (Which also means I’m not personally opposed to such a change. 🤷)
Maybe relevant that I’m used to Arbital’s ‘assign a probability to this claim’ feature. I just think of this as a more coarse-grained, fast version of Arbital’s tool for assigning probabilities to claims.
When I see disagree-votes on my comments, I think I typically feel bad about it if it’s also downvoted (often some flavor of ‘nooo you’re not fully understanding a thing I was trying to communicate!’), but happy about it if it’s upvoted. Something like:
Amusement at the upvote/agreevote disparity, and warm feelings toward LW that it was able to mentally separate its approval for the comment from how much probability it assigns to the comment being true.
Pride in LW for being one of the rare places on the Internet that cares about the distinction between ‘I like this’ and ‘I think this is true’.
I mostly don’t perceive the disagreevotes as ‘you are flatly telling me to my face that I’m wrong’. Rather, I perceive it more like ‘these people are writing journal entries to themselves saying “Dear Diary, my current belief is X”’, and then LW kindly records a bunch of these diary entries in a single centralized location so we can get cool polling data about where people are at. It feels to me like a side-note.
Possibly I was primed to interpret things this way by Arbital? On Arbital, probability assignments get their own page you can click through to; and Arbital pages are timeless, so people often go visit a page and vote on it years after the post was originally created, with (I think?) no expectation that the post author will ever see that they voted. And their names and specific probabilities are attached. All of which creates a sense of ‘this isn’t a response to the post, it’s just a tool for people to keep track of what they think of things’.
Maybe that’s the crux? I might be totally wrong, but I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
I think I can also better understand your perspective (though again, correct me if I’m wrong) if I imagine I’m in hostile territory surrounded by enemies.
Like, maybe you imagine five people stalking you around LW downvoting-and-disagreevoting on everything you post, unfairly strawmanning you, etc.; and then there’s a separate population of LWers who are more fair-minded and slower to rush to judgment.
But if the latter group of people tends to upvote you and humbly abstain from (dis)agreevoting, then the pattern we’ll often see is ‘you’re being upvoted and disagreed with’, as though the latter fair-minded population were doing both the upvoting and the disagreevoting. (Or as though the site as a whole were virtuously doing the support-and-defend-people’s-right-to-say-unpopular-things thing.) Which is in fact wildly different from a world where the fair-minded people are neutral or positive about the truth-value of your comments, while the Duncan-hounding trolls are the ones doing the downvoting and disagreevoting.
And even if the probability of the ‘Duncan-hounding trolls’ thing is low, it’s maddening to have so much uncertainty about which of those scenarios (or other scenarios) is occurring. And it’s doubly maddening to have to worry that third parties might assign unduly low probability to the ‘Duncan-hounding trolls’ thing, and to related scenarios. And that they might prematurely discount Duncan’s view, or be inclined to strawman it, after seeing a −8 or whatever that tells them ‘social reality is that this comment is Wrong’.
Again, tell me if this is all totally off-base. This is me story-telling so you can correct my models; I don’t have a crystal ball. But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
Like, I don’t think LW is reliably correct, and I don’t think of ‘people who use LW’ as the great-at-epistemics core of the rationalescent community. But I feel fine about the site, and able to advocate my views, be heard, persuade people, etc. If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction.
I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
… resonated pretty strongly.
But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Yes.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three.
Worth noting that I was on board with the def of approve/disapprove being “I could truthfully say this or something close to it from my own beliefs and experience.”
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this, is that… “what people take to be the meaning of a vote (of any kind and in any direction)” is not something that you (as the hypothetical system’s designer) can control, or determine, or hold stable, or predict, etc.
Indeed it’s not only possible, but likely, that:
different people will interpret votes differently;
people who cast the votes will interpret them differently from people who use the votes as readers;
there will be difficult-to-predict patterns in which people interpret votes in which ways;
how people interpret votes, and what patterns there are in this, will drift over time;
how people think about the meaning of the votes (when explicitly thinking about them) will differ from how people’s usage of the votes (from either end) maps to their cognitive and affective states (i.e., people think they think about votes one way, but they actually think about votes another way);
… etc., etc.
So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism (“boo”/“yay”), simply because it’s not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users’ feedback they want to modulate their experience).
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting?
(And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
That’s why it’s a hard problem. The idea would be to get leverage by letting you say “I trust this user’s judgement, including about whose judgement to trust”. Then you use something like (personalized) PageRank / eigenmorality https://scottaaronson.blog/?p=1820 to get useful information despite the circularity of “trusting who to trust about who to trust about …”, while leveraging all the users’ ratings of trust.
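(To make the circularity concrete, here is a rough sketch of how such trust propagation could work. The trust_edges structure, user names, and weights below are purely hypothetical illustrations, not anything LessWrong actually implements: each user rates how much they trust others, and a personalized-PageRank-style power iteration spreads that trust, so that people trusted by people you trust also end up weighted.)

```python
# Hedged sketch only: a toy personalized PageRank over a user-trust graph.
# All names, weights, and the trust_edges format are hypothetical.

def personalized_trust(trust_edges, seed_user, damping=0.85, iters=50):
    """trust_edges: {rater: {rated_user: weight, ...}, ...}
    Returns a trust score per user, biased toward seed_user's own ratings."""
    users = set(trust_edges) | {u for r in trust_edges.values() for u in r}
    scores = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        new = {u: 0.0 for u in users}
        for rater, ratings in trust_edges.items():
            total = sum(ratings.values())
            if total == 0:
                continue
            for rated, weight in ratings.items():
                # Trust flows from rater to rated, weighted by how trusted
                # the rater currently is (this is where the circularity lives).
                new[rated] += damping * scores[rater] * (weight / total)
        # The remaining "restart" mass returns to the seed user, which is what
        # personalizes the result. (Users who rate no one simply leak their
        # mass in this simplified version.)
        new[seed_user] += 1.0 - damping
        scores = new
    return scores

# Hypothetical usage: alice rates bob and carol; bob and carol rate each other.
edges = {"alice": {"bob": 1.0, "carol": 2.0},
         "bob": {"carol": 1.0},
         "carol": {"bob": 1.0}}
print(personalized_trust(edges, seed_user="alice"))
```

(A real system would of course need to handle scale, dangling users, and incentives; this just illustrates that the circular definition can still be computed.)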
I agree, but I find something valuable about, like, unambiguous labels anyway?
Like it’s easier for me to metabolize “fine, these people are using the button ‘wrong’ according to the explicit request made by the site” somehow, than it is to metabolize the confusingly ambiguous open-ended “agree/disagree” which, from comments all throughout this post, clearly means like six different clusters of Thing.
Did you mean “confusingly ambiguous”? If not, then could you explain that bit?
I did mean confusingly ambiguous, which is an ironic typo. Thanks.
I think we should be in the business of not setting up brand-new motte-and-baileys, and enshrining them in site architecture.
Yes, I certainly agree with this.
(I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing things are also motte-and-baileys… or something like them, anyway—is there even an “official” meaning of the karma vote buttons? Probably there is, but it’s not well-known enough to even be a “motte”, it seems to me… well, anyhow, as I said—maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)
I think an expansion of that subproblem is that “agreement” gets evaluated in different modes depending on the context of the comment. Having only one axis for it means the context can be chosen implicitly, which (to my mind) sort of happens anyway. Modes of agreement include truth in the objective sense, but also observational (we see the same thing, which is not quite the same as agreeing with the model-belief that observation generates), emotional (we feel the same response), axiological (we think the same actions are good), and salience-based (we both think this model is relevant—this is one of the cases where fuzziness versus the approval axis might come most into play). In my experience it seems reasonably clear for most comments which axis is “primary” (and I would just avoid indicating/interpreting on the “agreement” axis in case of ambiguity), but maybe that’s an illusion? And separating all of those out would be a much more radical departure from a single-axis karma system, and would impose even more complexity (and maybe rigidity?), but it might be worth considering what other ideas are out there.
More narrowly, I think having only the “objective truth” axis as the other axis might be good in some domains but fails badly in a more tangled conversation, and especially fails badly while partial models and observations are being thrown around, and that’s an important part of group rationality in practice.
If the labels were “true/false”, wouldn’t it still be unclear when people meant “not true for me, the reader” and when they meant “not true at all”?
I’ve gone into this in more detail elsewhere. Ultimately, the solution I like best is “Upvoting on this axis means ‘I could truthfully say this or something close to it from my own beliefs and experience.’”