e.g. a −1 just appeared on the top-level comment in the “agree/disagree” category and it makes me want to take my ball and go home and never come back.
I’m taking that feeling as object, rather than being fully subject to it, but when I anticipate fighting against that feeling every time I leave a comment, I conclude “this is a bad place for me to be.”
EDIT: it’s now −3. Is the takeaway “this comment is substantially more false than true”?
EDIT: now at −5, and yes, indeed, it is making me want to LEAVE LESSWRONG.
This means you’re using others’ reactions to define what you are or are not okay with.
I mean, if you think this −1 (then −3, then −5) is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?
And if you think it’s not reflecting something true, doesn’t your reaction highlight a place where your reactions need calibrating?
I’m pretty sure you’re actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.
But this is a place where a collective culture of emotional codependence actively screws with epistemics.
Which is to say, I disagree in a principled way with your sense of “wrongness” here, in the sense you name in your previous comment:
Like, the complaint here is not necessarily “y’all’re doing it Wrong” with a capital W so much as “y’all’re doing it in a way that seems wrong to me, given what I think ‘wrong’ is,” and there might just be genuine disagreement about wrongness.
I think a good truth-tracking culture acknowledges, but doesn’t try to ameliorate, the discomfort you’re naming in the comment I’m replying to.
(Whether LW agrees with me here is another matter entirely! This is just me.)
I mean, if you think this −1 (then −3, then −5) is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?
No, not quite.
There’s a difference (for instance) between knowledge and common knowledge, and there’s a difference (for instance) between animosity and punching.
Or maybe this is what you meant with “actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.”
A bunch of LWers can be individually and independently wrong about matters of fact, and this is different from them creating common knowledge that they all disagree with a thing (wrongly).
It’s better in an important sense for ten individually wrong people to each not have common knowledge that the other nine also are wrong about this thing, because otherwise they come together and form the anti-vax movement.
Similarly, a bunch of LWers can be individually in grumbly disagreement with me, and this is different from there being a flag for the grumbly discontent to come together and form SneerClub.
(It’s worth noting here that there is a mirror to all of this, i.e. there’s the world in which people are quietly right or in which their quiet discontent is, like, a Correct Moral Objection or something. But it is an explicit part of my thesis here that I do not trust LWers en masse. I think the actual consensus of LWers is usually hideously misguided, and that a lot of LW’s structure (e.g. weighted voting) helps to correct and ameliorate this fact, though not perfectly (e.g. Ben Hoffman’s patently-false slander of me being in positive vote territory for over a week with no one speaking in objection to it, which is a feature of Old LessWrong A Long Time Ago but it nevertheless still looms large in my model because I think New LessWrong Today is more like the post-Civil-War South (i.e. not all that changed) than like post-WWII-Japan (i.e. deeply restructured)).)
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
It is up in the air whether RightnessAndWrongnessAccordingToDuncan is closer to actually right than RightnessAndWrongnessAccordingToTheLWMob.
But it seems to me that the vote button in its current implementation, and evaluated according to the votes coming in, was more likely to sit in the non-overlap between those two, and in the LWMob part, which makes it an asymmetric weapon pointed in the wrong direction.
Sorry, this comment is sort of quickly tossed off; please let me know if it doesn’t make sense.
Mmm. It makes sense. It was a nuance I missed about your intent. Thank you.
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
Abstractly that seems maybe good.
My gut sense is you can’t do that by targeting how coalitions form. That engenders Goodhart drift. You’ve got to do it by making truth easier to notice in some asymmetric way.
I don’t know how to do that.
I agree that this voting system doesn’t address your concern.
It’s unclear to me how big a problem it is though. Maybe it’s huge. I don’t know.
EDIT: it’s now −3. Is the takeaway “this comment is substantially more false than true”?
I think other people are saying “the sentences that Duncan says about himself are not true for me” while also saying “I am nevertheless glad that Duncan said it”. This seems like great information for me, and is, like, quite important for figuring out from this thread how people want us to change the feature.
And if you change the agreement dimension to a truth dimension, this data will no longer be possible to express in terms of voting, because it’s not the case that the Duncan-opinion is false.
The distinction between “not true for me, the reader” and “not true at all” is not clear.
And that is the distinction between “agree/disagree” and “true/false.”
Hmm, I do sure find the first one more helpful when people talk about themselves. Like, if someone says “I think X”, I want to know when other people would say “I think not X”. I don’t want people to tell me whether they think the OP accurately reported on their own beliefs and really believes X.
Yeah. Both are useful, and each is more useful in some context or other. I just want it to be relatively unambiguous which is happening—I really felt like I was being told I was wrong in my top-level comment. That was the emotional valence.
I’m sad this is your experience!
I interpret “agree/disagree” in this context as literally ‘is this comment true, as far as you can tell, or is it false?’, so when I imagine changing it to “true/false” I don’t imagine it feeling any different to me. (Which also means I’m not personally opposed to such a change. 🤷)
Maybe relevant that I’m used to Arbital’s ‘assign a probability to this claim’ feature. I just think of this as a more coarse-grained, fast version of Arbital’s tool for assigning probabilities to claims.
When I see disagree-votes on my comments, I think I typically feel bad about it if it’s also downvoted (often some flavor of ‘nooo you’re not fully understanding a thing I was trying to communicate!’), but happy about it if it’s upvoted. Something like:
Amusement at the upvote/agreevote disparity, and warm feelings toward LW that it was able to mentally separate its approval for the comment from how much probability it assigns to the comment being true.
Pride in LW for being one of the rare places on the Internet that cares about the distinction between ‘I like this’ and ‘I think this is true’.
I mostly don’t perceive the disagreevotes as ‘you are flatly telling me to my face that I’m wrong’. Rather, I perceive it more like ‘these people are writing journal entries to themselves saying “Dear Diary, my current belief is X”’, and then LW kindly records a bunch of these diary entries in a single centralized location so we can get cool polling data about where people are at. It feels to me like a side-note.
Possibly I was primed to interpret things this way by Arbital? On Arbital, probability assignments get their own page you can click through to; and Arbital pages are timeless, so people often go visit a page and vote on it years after the post was originally created, with (I think?) no expectation that the post author will ever see that they voted. And their names and specific probabilities are attached. All of which creates a sense of ‘this isn’t a response to the post, it’s just a tool for people to keep track of what they think of things’.
Maybe that’s the crux? I might be totally wrong, but I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
I think I can also better understand your perspective (though again, correct me if I’m wrong) if I imagine I’m in hostile territory surrounded by enemies.
Like, maybe you imagine five people stalking you around LW, downvoting-and-disagreevoting on everything you post, unfairly strawmanning you, etc.; and then there’s a separate population of LWers who are more fair-minded and slower to rush to judgment.
But if the latter group of people tends to upvote you and humbly abstain from (dis)agreevoting, then the pattern we’ll often see is ‘you’re being upvoted and disagreed with’, as though the latter fair-minded population were doing both the upvoting and the disagreevoting. (Or as though the site as a whole were virtuously doing the support-and-defend-people’s-right-to-say-unpopular-things thing.) Which is in fact wildly different from a world where the fair-minded people are neutral or positive about the truth-value of your comments, while the Duncan-hounding trolls are the ones driving the disagreement score.
And even if the probability of the ‘Duncan-hounding trolls’ thing is low, it’s maddening to have so much uncertainty about which of those scenarios (or other scenarios) is occurring. And it’s doubly maddening to have to worry that third parties might assign unduly low probability to the ‘Duncan-hounding trolls’ thing, and to related scenarios. And that they might prematurely discount Duncan’s view, or be inclined to strawman it, after seeing a −8 or whatever that tells them ‘social reality is that this comment is Wrong’.
Again, tell me if this is all totally off-base. This is me story-telling so you can correct my models; I don’t have a crystal ball. But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
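(To make the kind of breakdown described above concrete: here is a minimal sketch, using a made-up per-voter record that stores both an approval vote and an agreement vote rather than anything LW actually exposes, of tallying the four combinations separately instead of collapsing them into two net scores.)

```python
from collections import Counter

# Hypothetical per-voter records: each voter casts an approval vote and an
# agreement vote on the same comment (an illustrative data shape, not
# LessWrong's actual schema).
votes = [
    {"voter": "a", "approve": +1, "agree": +1},
    {"voter": "b", "approve": +1, "agree": -1},  # "glad you said it, but I think it's false"
    {"voter": "c", "approve": +1, "agree": -1},
    {"voter": "d", "approve": -1, "agree": -1},
]

# Tally the four approve/agree combinations separately,
# rather than collapsing them into two net totals.
breakdown = Counter((v["approve"], v["agree"]) for v in votes)
for (approve, agree), n in sorted(breakdown.items()):
    print(f"approve={approve:+d}, agree={agree:+d}: {n} voter(s)")
```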
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
Like, I don’t think LW is reliably correct, and I don’t think of ‘people who use LW’ as the great-at-epistemics core of the rationalescent community. But I feel fine about the site, and able to advocate my views, be heard, persuade people, etc. If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction.
I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
… resonated pretty strongly.
But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Yes.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three.
Worth noting that I was on board with the def of approve/disapprove being “I could truthfully say this or something close to it from my own beliefs and experience.”
Worth noting that I was on board with the def of approve/disapprove being “I could truthfully say this or something close to it from my own beliefs and experience.”
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this, is that… “what people take to be the meaning of a vote (of any kind and in any direction)” is not something that you (as the hypothetical system’s designer) can control, or determine, or hold stable, or predict, etc.
Indeed it’s not only possible, but likely, that:
different people will interpret votes differently;
people who cast the votes will interpret them differently from people who use the votes as readers;
there will be difficult-to-predict patterns in how different people interpret votes;
how people interpret votes, and what patterns there are in this, will drift over time;
how people think about the meaning of the votes (when explicitly thinking about them) differs from how people’s usage of the votes (from either end) maps to their cognitive and affective states (i.e., people think they think about votes one way, but they actually think about votes another way);
… etc., etc.
So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism (“boo”/“yay”), simply because it’s not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users’ feedback they want modulating their experience).
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting?
(And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
That’s why it’s a hard problem. The idea would be to get leverage by letting you say “I trust this user’s judgement, including about whose judgement to trust”. Then you use something like (personalized) PageRank / eigenmorality https://scottaaronson.blog/?p=1820 to get useful information despite the circularity of “trusting who to trust about who to trust about …”, which leverages all the users’ ratings of trust.
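(A minimal sketch of that idea, with a toy trust graph and made-up user names; this is just an illustration of personalized PageRank over declared trust edges, not a worked-out proposal for the site.)

```python
# Toy personalized PageRank over a user-declared trust graph: each user's
# influence, from a given reader's perspective, is a fixed point of
# "trust flows to the people whom the people you trust also trust."

def personalized_pagerank(trust, seed, damping=0.85, iters=100):
    """trust: dict mapping each user to the list of users they trust.
    seed: the reader whose perspective we personalize to."""
    users = list(trust)
    scores = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        new = {u: 0.0 for u in users}
        for u in users:
            trusted = trust[u]
            if not trusted:               # trusts no one: mass returns to the reader
                new[seed] += damping * scores[u]
                continue
            share = damping * scores[u] / len(trusted)
            for v in trusted:
                new[v] += share
        new[seed] += 1.0 - damping        # "teleport" back to the reader's own judgement
        scores = new
    return scores

# Hypothetical example: from alice's perspective, bob and carol end up with
# most of the weight because alice trusts bob and bob trusts carol.
graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["bob"], "dave": []}
print(personalized_pagerank(graph, seed="alice"))
```

The fixed-point iteration is what absorbs the circularity: “whom to trust about whom to trust” is just repeated propagation of trust mass until the scores stop changing.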
I agree, but I find something valuable about, like, unambiguous labels anyway?
Like it’s easier for me to metabolize “fine, these people are using the button ‘wrong’ according to the explicit request made by the site” somehow, than it is to metabolize the confusingly ambiguous open-ended “agree/disagree” which, from comments all throughout this post, clearly means like six different clusters of Thing.
Did you mean “confusingly ambiguous”? If not, then could you explain that bit?
I did mean confusingly ambiguous, which is an ironic typo. Thanks.
I think we should be in the business of not setting up brand-new motte-and-baileys, and enshrining them in site architecture.
Yes, I certainly agree with this.
(I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing things are also motte-and-baileys… or something like them, anyway—is there even an “official” meaning of the karma vote buttons? Probably there is, but it’s not well-known enough to even be a “motte”, it seems to me… well, anyhow, as I said—maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)
I think an expansion of that subproblem is that “agreement” takes on different modes depending on the context of the comment. Having only one axis for it means the context can be chosen implicitly, which (to my mind) sort of happens anyway. Modes of agreement include truth in the objective sense, but also observational (we see the same thing, which is not quite the same as the model-belief it generates), emotional (we feel the same response), axiological (we think the same actions are good), and salience-based (we both think this model is relevant—this is one of the cases where fuzziness versus the approval axis might come most into play). In my experience it seems reasonably clear for most comments which axis is “primary” (and I would just avoid indicating/interpreting on the “agreement” axis in case of ambiguity), but maybe that’s an illusion? And separating all of those out would be a much more radical departure from a single-axis karma system, and impose even more complexity (and maybe rigidity?), but it might be worth considering what other ideas are around that.
More narrowly, I think having only the “objective truth” axis as the other axis might be good in some domains but fail badly in a more tangled conversation, and especially fail badly while partial models and observations are being thrown around, and that’s an important part of group rationality in practice.
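(For concreteness, a hypothetical sketch of what separating out the agreement modes named above might look like as a data model; the names and shape here are invented for illustration, and obviously a much heavier design than a single agree/disagree axis.)

```python
from dataclasses import dataclass
from enum import Enum

# Invented labels for the agreement modes described in the comment above.
class AgreementMode(Enum):
    OBJECTIVE = "objective truth"    # I believe the claim is true
    OBSERVATIONAL = "observational"  # I see the same thing
    EMOTIONAL = "emotional"          # I feel the same response
    AXIOLOGICAL = "axiological"      # I think the same actions are good
    SALIENCE = "salience"            # I think this model is relevant here

@dataclass
class AgreementVote:
    voter: str
    mode: AgreementMode
    direction: int  # +1 agree, -1 disagree, on that particular axis

# e.g. agreeing that a model is relevant while disagreeing that it is true:
votes = [
    AgreementVote("a", AgreementMode.SALIENCE, +1),
    AgreementVote("a", AgreementMode.OBJECTIVE, -1),
]
```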
If the labels were “true/false”, wouldn’t it still be unclear when people meant “not true for me, the reader” and when they meant “not true at all”?
I’ve gone into this in more detail elsewhere. Ultimately, the solution I like best is “Upvoting on this axis means ‘I could truthfully say this or something close to it from my own beliefs and experience.’”
I think I experience silent, contentless net-disagreement as very hard to interface with. It doesn’t specify what’s wrong with my comment, it doesn’t tell me what the disagreer’s crux is, it doesn’t give me any handholds or ways-to-resolve-the-disagreement. It’s just a “you’ve-been-kicked” sign sitting on my comment forever.
Whereas “the consensus of LW users asked to evaluate this comment for truth is that it is more false than true” is at least conveying something interesting. It can tell me to, for instance, go add more sources and argument in defense of my claims.
Yeah, I think this is a problem, but I think contentless net-disapproval is substantially worse than that (at least for me, I can imagine it being worse for some people, but overall expect people to strongly prefer contentless net-disagreement to contentless net-disapproval).
Like, I think one outcome of this voting system change is that some contentless net-disapproval gets transformed into contentless net-disagreement, which I think has a substantially better effect on the discourse (especially if combined with high approval, which I think carves out a real place for people who say lots of stuff that others disagree with, which I think is good).
(I added a small edit after the fact that you may not have seen.)
Ah, indeed. Seems like it’s related to a broader mismatch on agree/disagree vs. true/false that we are discussing in other threads.
(Preamble: I am sort of hesitant to go too far in this subthread for fear of pushing your apparent strong reaction further. Would it be appropriate to cool down for a while elsewhere before coming back to this? I hope that’s not too intrusive to say, and I hope my attempt below to figure out what’s happening isn’t too intrusively psychoanalytical.)
I would like to gently suggest that the mental motion of not treating disagreement (even when it’s quite vague) as “being kicked”—and learning to do some combination of regulating that feeling and not associating it to begin with—forms, at least for me, a central part of the practical reason for distinguishing discursive quality from truth in the first place. By contrast, a downvote in the approval sense is meant to (but that doesn’t mean “will consistently be treated as”, of course!) potentially be the social nudge side—the negative-reinforcement “it would have been better if you hadn’t posted that” side.
I was initially confused as well as to how the four-pointed star version you suggested elsewhere would handle this, but combining the two, I think I see a possibility, now. Would it be accurate to say that you have difficulty processing what feels like negative reinforcement on one axis when it is not specifically coupled with either confirmatory negative or relieving positive reinforcement on the other, and that your confusion around the two-axis system involves a certain amount of reflexive “when I see a negative on one axis, I feel compelled to figure out which direction it means on the other axis to determine whether I should feel bad”? Because if so, that makes me wonder how many people do that by default.
I think it’s easy for me to parse approval/disapproval, and it’s easy for me to parse assertions-of-falsehood/assertions-of-truth. I think it’s hard for me to parse something like “agree/disagree” which feels set up to motte-bailey between those.
Okay. I think I understand better now, and especially how this relates to the “trust” you mention elsewhere. In other words, something more like: you think/feel that not locking the definition down far enough will lead to lack of common knowledge on interpretation combined with a more pervasive social need to understand the interpretation to synchronize? Or something like: this will have the same flaws as karma, only people will delude themselves that it doesn’t?
Yes to both of your summaries, roughly.
Strange-Loop relevant: this very comment above is one where I went back to “disagree” with myself after Duncan’s reply. What I meant by that is that I originally thought the idea I was stating was likely to be both true and relevant, but now I have changed my mind and think it is not likely to be true, but I don’t think that making the post in the first place was a bad idea with what I knew at the time (and thus I haven’t downvoted myself on the other axis). However, I then remembered that retraction was also an option. I decided to use that too in this case, but I’m not sure that makes full sense here; there’s something about the crossed-out text that gives me a different impression I’m not sure how to unpack right now. Feedback on whether that was a “correct” action or not is welcome.
Disagreement is not necessarily about truth, it’s often about (not) sharing a subjective opinion. In that case resolving it doesn’t make any sense, the things in disagreement can coexist, just as you and the disagreer are different people. The expectation that agreement is (always) about truth is just mistranslation, the meaning is different. Of course falsity/fallaciousness implies disagreement with people who see truth/validity, so it’s some evidence about error if the claims you were making are not subjective (author-referring).
contentless net-disagreement as very hard to interface with
For subjective claims, the alternative to disagreement being comfortable is emotional experience of intolerance, intuitive channeling of conformance-norm-enforcement (whether externally enacted, or self-targeted, or neither).
Right. I’m advocating that we do have a symbol for agreement/disagreement about truth, and leave the subjective stuff in the karma score.
When the comment is about truth, then agreement/disagreement is automatically about truth. There are comments that are not about truth; being about truth is a special case that shouldn’t be in the general interface, especially if it happens to already be the intended special case of this more general thing I’m pointing at.
I definitely don’t think that “When the comment is about truth, then agreement/disagreement is automatically about truth” is a true statement about humans in general, though it might be aspirationally true of LWers?
theyhatedhimbecausehetoldthemthetruth.meme