Pulling together thoughts from a variety of subthreads:
I expect this to meaningfully deter me/create substantial demoralization and bad feelings when I attempt to participate in comment threads, and therefore cause me to do so even less than I currently do.
This impression has been building across all the implementations of the two-factor voting over the past few months.
In particular: the thing I wanted and was excited about from a novel or two-factor voting system was a distinction between what’s overall approved or disapproved (i.e. I like or dislike the addition to the conversation, think it was productive or counterproductive) and what’s true or false (i.e. I endorse the claims or reasoning and think that more people should believe them to be true).
I very much do not believe that “agree or disagree” is a good proxy for that/tracks that. I think that it doesn’t train LWers to distinguish their sense of truth or falsehood from how much their monkey brain wants to signal-boost a given contribution. I don’t think it is going to nudge us toward better discourse and clearer separation of [truth] and [value].
It feels like it’s an active step away from that, and therefore it makes me sad. It’s signal-boosting mob agreement and mob disagreement in a way that feels like more unthinking subjectivity, rather than less.
I think this would not be true if I had faith in the userbase, i.e. in a group composed entirely of Oli, Vaniver, Said, Logan, and Eliezer, I would trust the agreement/disagreement button.
But with LW writ large, I think it’s sort of … halfway pretending to be a signal of truth while secretly just being more-of-the-thing-karma-was-already-doing, i.e. popularity contest made slightly better by the fact that the people judging popularity are trying a little to make actually-good things popular.
(This impression based on scattered assessments of the second vote on various comments over the past few months.)
I very much do not believe that “agree or disagree” is a good proxy for that/tracks that. I think that it doesn’t train LWers to distinguish their sense of truth or falsehood from how much their monkey brain wants to signal-boost a given contribution. I don’t think it is going to nudge us toward better discourse and clearer separation of [truth] and [value].
See my other comment. I don’t think agree/disagree is much different from true/false, and am confused about the strength of your reaction here. I personally don’t have a strong preference, and only mildly prefer “agree/disagree” because it is more clearly in the same category as “approve/disapprove”, i.e. an action, instead of a state.
I think the hover-over text needs tweaking anyways. If other people also have a preference for saying something like “Agree: Do you think the content of this comment is true?” and “Disagree: Do you think the content of this comment is false?”, then that seems good to me. Having “approve/disapprove” and “true/false” as the top-level distinction does sure parse as a type error to me (why is one an action, and the other one an adjective?).
I also think we should definitely change the hover for the karma-vote dimension to say “approve” and “disapprove”, instead of “like” and “dislike”, which I think captures the dimensions here better.
Agree: Do you think the content of this comment is true?
Apart from equivocation of words with usefully different meanings, I think it’s less useful to extract truth-dimension than agreement-dimension, since truth-dimension is present less often, doesn’t help with improving approval-dimension, and agreement-dimension becomes truth-dimension for objective claims, so truth-dimension is a special case of the more-useful-for-other-things agreement-dimension.
I think the karma dimension already captures the-parts-of-the-agreement-dimension-that-aren’t-truth.

I think this is false. Subjective disagreement shouldn’t imply disapproval; capturing subjective-disagreement by disapproval rounds it off to disincentivization of non-conformity, which is a problem. Extracting it into a separate dimension solves this karma-problem.
It is less useful for what you want because it’s contextually-more-ambiguous than the truth-verdict. So I think the meaningful disagreement between me and you/habryka(?) might be in which issue is more important (to spend the second-voting-dimension slot on). I think the large quantity of karma-upvoted/agreement-downvoted comments to this post is some evidence for the importance of the idea I’m professing.
To derive from something I said as a secondary part of another comment, possibly more clearly: I think that extracting “social approval that this post was a good idea and should be promoted” while conflating other forms of “agreement” is a better choice of dimensionality reduction than extracting “objective truth of the statements in this post” while conflating other forms of “approval”. Note that the former makes this change kind of a “reverse extraction” where the karma system was meant to be centered around that one element to begin with and now has some noise removed, while the other elements now have a place to be rather than vanishing. The last part of that may center some disapprovals of the new system, along the lines of “amplifying the rest of it into its own number (rather than leaving it as an ambiguous background presence) introduces more noise than is removed by keeping the social approval axis ‘clean’” (which I don’t believe, but I can partly see why other people might believe).
Of Strange Loop relevance: I am treating most of the above beliefs of mine here as having primarily intersubjective truth value, which is similar in a lot of relevant ways to an objective truth value but only contextually interconvertible.
Hmm, what about language like

“Agree: Do you think the content of this comment is true? (Or if the comment is about an emotional reaction or belief of the author, does that statement resonate with you?)”
It sure is a mouthful, but it feels like it points towards a coherent cluster.
I think the thing Duncan wants is harder to formulate than this; it has to disallow voting on aspects of the comment that are not about factual claims whose truth is relevant. And since most claims are true, it somehow has to avoid the everyone-truth-upvotes-everything default in a way that retains some sort of useful signal instead of deciding the number of upvotes based on truth-unrelated selection effects. I don’t see what this should mean for comments-in-general, carefully explained, and I don’t currently have much hope that it can be operationalized into something more useful than agreement.
I am self-aware about the fact that this might just mean “this isn’t your scene, Duncan; you don’t belong” more than “this group is doing something wrong for this group’s goals and values.”
Like, the complaint here is not necessarily “y’all’re doing it Wrong” with a capital W so much as “y’all’re doing it in a way that seems wrong to me, given what I think ‘wrong’ is,” and there might just be genuine disagreement about wrongness.
But I think “agree/disagree” points people toward yet more of the same social junk that we’re trying to bootstrap out of, in a way that “true/false” does not. It feels like that’s where this went wrong/that’s what makes this seem doomed-from-the-start and makes me really emotionally resistant to it.
I do not trust the aggregated agreement or disagreement of LW writ large to help me see more clearly or be a better reasoner, and I do not expect it to identify and signal-boost truth and good argument for e.g. young promising new users trying to become less wrong.
e.g. a −1 just appeared on the top-level comment in the “agree/disagree” category and it makes me want to take my ball and go home and never come back.
I’m taking that feeling as object, rather than being fully subject to it, but when I anticipate fighting against that feeling every time I leave a comment, I conclude “this is a bad place for me to be.”
EDIT: it’s now −3. Is the takeaway “this comment is substantially more false than true”?
EDIT: now at −5, and yes, indeed, it is making me want to LEAVE LESSWRONG.
This means you’re using others’ reactions to define what you are or are not okay with.
I mean, if you think this −1/−3/−5 is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?
And if you think it’s not reflecting something true, doesn’t your reaction highlight a place where your reactions need calibrating?
I’m pretty sure you’re actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.
But this is a place where a collective culture of emotional codependence actively screws with epistemics.
Which is to say, I disagree in a principled way with your sense of “wrongness” here, in the sense you name in your previous comment:
Like, the complaint here is not necessarily “y’all’re doing it Wrong” with a capital W so much as “y’all’re doing it in a way that seems wrong to me, given what I think ‘wrong’ is,” and there might just be genuine disagreement about wrongness.
I think a good truth-tracking culture acknowledges, but doesn’t try to ameliorate, the discomfort you’re naming in the comment I’m replying to.
(Whether LW agrees with me here is another matter entirely! This is just me.)
I mean, if you think this −1/−3/−5 is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?
No, not quite.
There’s a difference (for instance) between knowledge and common knowledge, and there’s a difference (for instance) between animosity and punching.
Or maybe this is what you meant with “actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.”
A bunch of LWers can be individually and independently wrong about matters of fact, and this is different from them creating common knowledge that they all disagree with a thing (wrongly).
It’s better in an important sense for ten individually wrong people to each not have common knowledge that the other nine also are wrong about this thing, because otherwise they come together and form the anti-vax movement.
Similarly, a bunch of LWers can be individually in grumbly disagreement with me, and this is different from there being a flag for the grumbly discontent to come together and form SneerClub.
(It’s worth noting here that there is a mirror to all of this, i.e. there’s the world in which people are quietly right or in which their quiet discontent is, like, a Correct Moral Objection or something. But it is an explicit part of my thesis here that I do not trust LWers en masse. I think the actual consensus of LWers is usually hideously misguided, and that a lot of LW’s structure (e.g. weighted voting) helps to correct and ameliorate this fact, though not perfectly (e.g. Ben Hoffman’s patently-false slander of me being in positive vote territory for over a week with no one speaking in objection to it, which was a feature of Old LessWrong A Long Time Ago, but it nevertheless still looms large in my model because I think New LessWrong Today is more like the post-Civil-War South (i.e. not all that changed) than like post-WWII-Japan (i.e. deeply restructured)).)
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
It is up in the air whether RightnessAndWrongnessAccordingToDuncan is closer to actually right than RightnessAndWrongnessAccordingToTheLWMob.
But it seems to me that the vote button in its current implementation, and evaluated according to the votes coming in, was more likely to be in the non-overlap between those two, and in the LWMob part, which means an asymmetric weapon in the wrong direction.
Sorry, this comment is sort of quickly tossed off; please let me know if it doesn’t make sense.
Mmm. It makes sense. It was a nuance I missed about your intent. Thank you.
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
Abstractly that seems maybe good.
My gut sense is you can’t do that by targeting how coalitions form. That engenders Goodhart drift. You’ve got to do it by making truth easier to notice in some asymmetric way.
I don’t know how to do that.
I agree that this voting system doesn’t address your concern.
It’s unclear to me how big a problem it is though. Maybe it’s huge. I don’t know.
EDIT: it’s now −3. Is the takeaway “this comment is substantially more false than true”?
I think other people are saying “the sentences that Duncan says about himself are not true for me” while also saying “I am nevertheless glad that Duncan said it”. This seems like great information for me, and is like, quite important for me getting information from this thread about how people want us to change the feature.
And if you change agreement-dimension to truth-dimension, this data will no longer be possible to express in terms of voting, because it’s not the case that Duncan-opinion is false.

The distinction between “not true for me, the reader” and “not true at all” is not clear.

And that is the distinction between “agree/disagree” and “true/false.”
Hmm, I do sure find the first one more helpful when people talk about themselves. Like, if someone says “I think X”, I want to know when other people would say “I think not X”. I don’t want people to tell me whether they think the OP accurately reported on their own beliefs and really believes X.
Yeah. Both are useful, and each is more useful in some context or other. I just want it to be relatively unambiguous which is happening—I really felt like I was being told I was wrong in my top-level comment. That was the emotional valence.
I’m sad this is your experience!

I interpret “agree/disagree” in this context as literally ‘is this comment true, as far as you can tell, or is it false?’, so when I imagine changing it to “true/false” I don’t imagine it feeling any different to me. (Which also means I’m not personally opposed to such a change. 🤷)
Maybe relevant that I’m used to Arbital’s ‘assign a probability to this claim’ feature. I just think of this as a more coarse-grained, fast version of Arbital’s tool for assigning probabilities to claims.
When I see disagree-votes on my comments, I think I typically feel bad about it if it’s also downvoted (often some flavor of ‘nooo you’re not fully understanding a thing I was trying to communicate!’), but happy about it if it’s upvoted. Something like:
Amusement at the upvote/agreevote disparity, and warm feelings toward LW that it was able to mentally separate its approval for the comment from how much probability it assigns to the comment being true.
Pride in LW for being one of the rare places on the Internet that cares about the distinction between ‘I like this’ and ‘I think this is true’.
I mostly don’t perceive the disagreevotes as ‘you are flatly telling me to my face that I’m wrong’. Rather, I perceive it more like ‘these people are writing journal entries to themselves saying “Dear Diary, my current belief is X”’, and then LW kindly records a bunch of these diary entries in a single centralized location so we can get cool polling data about where people are at. It feels to me like a side-note.
Possibly I was primed to interpret things this way by Arbital? On Arbital, probability assignments get their own page you can click through to; and Arbital pages are timeless, so people often go visit a page and vote on it years after the post was originally created, with (I think?) no expectation that the post author will ever see that they voted. And their names and specific probabilities are attached. All of which creates a sense of ‘this isn’t a response to the post, it’s just a tool for people to keep track of what they think of things’.
Maybe that’s the crux? I might be totally wrong, but I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
I think I can also better understand your perspective (though again, correct me if I’m wrong) if I imagine I’m in hostile territory surrounded by enemies.
Like, maybe you imagine five people stalking you around LW, downvoting-and-disagreevoting everything you post, unfairly strawmanning you, etc.; and then there’s a separate population of LWers who are more fair-minded and slower to rush to judgment.
But if the latter group of people tends to upvote you and humbly abstain from (dis)agreevoting, then the pattern we’ll often see is ‘you’re being upvoted and disagreed with’, as though the latter fair-minded population were doing both the upvoting and the disagreevoting. (Or as though the site as a whole were virtuously doing the support-and-defend-people’s-right-to-say-unpopular-things thing.) Which is in fact wildly different from a world where the fair-minded people are neutral or positive about the truth-value of your comments, while the Duncan-hounding trolls are the ones supplying all the disagreevotes.
And even if the probability of the ‘Duncan-hounding trolls’ thing is low, it’s maddening to have so much uncertainty about which of those scenarios (or other scenarios) is occurring. And it’s doubly maddening to have to worry that third parties might assign unduly low probability to the ‘Duncan-hounding trolls’ thing, and to related scenarios. And that they might prematurely discount Duncan’s view, or be inclined to strawman it, after seeing a −8 or whatever that tells them ‘social reality is that this comment is Wrong’.
Again, tell me if this is all totally off-base. This is me story-telling so you can correct my models; I don’t have a crystal ball. But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
Like, I don’t think LW is reliably correct, and I don’t think of ‘people who use LW’ as the great-at-epistemics core of the rationalescent community. But I feel fine about the site, and able to advocate my views, be heard, persuade people, etc. If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction.
I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
… resonated pretty strongly.
But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Yes.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three.
Worth noting that I was on board with the def of approve/disapprove being “I could truthfully say this or something close to it from my own beliefs and experience.”
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this, is that… “what people take to be the meaning of a vote (of any kind and in any direction)” is not something that you (as the hypothetical system’s designer) can control, or determine, or hold stable, or predict, etc.
Indeed it’s not only possible, but likely, that:
different people will interpret votes differently;
people who cast the votes will interpret them differently from people who use the votes as readers;
there will be difficult-to-predict patterns in which people interpret votes how;
how people interpret votes, and what patterns there are in this, will drift over time;
how people think about the meaning of the votes (when explicitly thinking about them) differs from how people’s usage of the votes (from either end) maps to their cognitive and affective states (i.e., people think they think about votes one way, but they actually think about votes another way);
… etc., etc.
So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism (“boo”/“yay”), simply because it’s not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users’ feedback they want to modulate their experience).
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting?
(And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
That’s why it’s a hard problem. The idea would be to get leverage by letting you say “I trust this user’s judgement, including about whose judgement to trust”. Then you use something like (personalized) PageRank / eigenmorality https://scottaaronson.blog/?p=1820 to get useful information despite the circularity of “trusting who to trust about who to trust about …”, and which leverages all the users’ ratings of trust.
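To make the circularity concrete, here is a minimal sketch (in Python, purely illustrative; none of this is anything LessWrong actually implements) of how personalized PageRank over a hypothetical trust graph could turn “I trust these users, and they trust others” into per-user weights for aggregating votes. The `trust` matrix, `trust_weights`, and `weighted_score` names are all assumptions introduced for the example.

```python
# Illustrative sketch: trust-weighted vote aggregation via personalized PageRank.
# Not LessWrong's actual system; the trust matrix and names are hypothetical.
import numpy as np

def trust_weights(trust: np.ndarray, me: int, damping: float = 0.85,
                  iters: int = 100) -> np.ndarray:
    """trust[i, j] > 0 means user i trusts user j's judgement.
    Returns one weight per user, seeded by user `me`'s direct trust."""
    n = trust.shape[0]
    # Row-normalize outgoing trust; users who trust no one get a uniform row.
    row_sums = trust.sum(axis=1, keepdims=True)
    T = np.divide(trust, row_sums,
                  out=np.full_like(trust, 1.0 / n, dtype=float),
                  where=row_sums > 0)
    # Personalization: restart at the users `me` trusts directly (or at `me`).
    restart = trust[me] / trust[me].sum() if trust[me].sum() > 0 else np.eye(n)[me]
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Trust flows one hop along the graph, mixed with the restart vector.
        w = damping * (w @ T) + (1 - damping) * restart
    return w

def weighted_score(votes: dict[int, int], weights: np.ndarray) -> float:
    """Aggregate +1/-1 votes, weighting each voter by transitive trust."""
    return sum(weights[user] * vote for user, vote in votes.items())
```

The fixed point of the iteration is what resolves the “trusting who to trust about who to trust …” regress: the weights stabilize once each user’s weight is consistent with the weights of the people who trust them.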
I agree, but I find something valuable about, like, unambiguous labels anyway?
Like it’s easier for me to metabolize “fine, these people are using the button ‘wrong’ according to the explicit request made by the site” somehow, than it is to metabolize the confusingly ambiguous open-ended “agree/disagree” which, from comments all throughout this post, clearly means like six different clusters of Thing.
(I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing things are also motte-and-baileys… or something like them, anyway—is there even an “official” meaning of the karma vote buttons? Probably there is, but it’s not well-known enough to even be a “motte”, it seems to me… well, anyhow, as I said—maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)
I think an expansion of that subproblem is that “agreement” is determined in more contexts and modalities depending on the context of the comment. Having only one axis for it means the context can be chosen implicitly, which (to my mind) sort of happens anyway. Modes of agreement include truth in the objective sense but also observational (we see the same thing, not quite the same as what model-belief that generates), emotional (we feel the same response), axiological (we think the same actions are good), and salience-based (we both think this model is relevant—this is one of the cases where fuzziness versus the approval axis might come most into play). In my experience it seems reasonably clear for most comments which axis is “primary” (and I would just avoid indicating/interpreting on the “agreement” axis in case of ambiguity), but maybe that’s an illusion? And separating all of those out would be a much more radical departure from a single-axis karma system, and impose even more complexity (and maybe rigidity?), but it might be worth considering what other ideas are around that.
More narrowly, I think having only the “objective truth” axis as the other axis might be good in some domains but fails badly in a more tangled conversation, and especially fails badly while partial models and observations are being thrown around, and that’s an important part of group rationality in practice.
I’ve gone into this in more detail elsewhere. Ultimately, the solution I like best is “Upvoting on this axis means ‘I could truthfully say this or something close to it from my own beliefs and experience.’”
I think I experience silent, contentless net-disagreement as very hard to interface with. It doesn’t specify what’s wrong with my comment, it doesn’t tell me what the disagreer’s crux is, it doesn’t give me any handholds or ways-to-resolve-the-disagreement. It’s just a “you’ve-been-kicked” sign sitting on my comment forever.
Whereas “the consensus of LW users asked to evaluate this comment for truth is that it is more false than true” is at least conveying something interesting. It can tell me to, for instance, go add more sources and argument in defense of my claims.
Yeah, I think this is a problem, but I think contentless net-disapproval is substantially worse than that (at least for me, I can imagine it being worse for some people, but overall expect people to strongly prefer contentless net-disagreement to contentless net-disapproval).
Like, I think one outcome of this voting system change is that some contentless net-disapproval gets transformed into contentless net-disagreement, which I think has a substantially better effect on the discourse (especially if combined with high approval, which I think carves out a real place for people who say lots of stuff that others disagree with, which I think is good).
(Preamble: I am sort of hesitant to go too far in this subthread for fear of pushing your apparent strong reaction further. Would it be appropriate to cool down for a while elsewhere before coming back to this? I hope that’s not too intrusive to say, and I hope my attempt below to figure out what’s happening isn’t too intrusively psychoanalytical.)
I would like to gently suggest that the mental motion of not treating disagreement (even when it’s quite vague) as “being kicked”—and learning to do some combination of regulating that feeling and not associating it to begin with—forms, at least for me, a central part of the practical reason for distinguishing discursive quality from truth in the first place. By contrast, a downvote in the approval sense is meant to (but that doesn’t mean “will consistently be treated as”, of course!) potentially be the social nudge side—the negative-reinforcement “it would have been better if you hadn’t posted that” side.
I was initially confused as well as to how the four-pointed star version you suggested elsewhere would handle this, but combining the two, I think I see a possibility, now. Would it be accurate to say that you have difficulty processing what feels like negative reinforcement on one axis when it is not specifically coupled with either confirmatory negative or relieving positive reinforcement on the other, and that your confusion around the two-axis system involves a certain amount of reflexive “when I see a negative on one axis, I feel compelled to figure out which direction it means on the other axis to determine whether I should feel bad”? Because if so, that makes me wonder how many people do that by default.
I think it’s easy for me to parse approval/disapproval, and it’s easy for me to parse assertions-of-falsehood/assertions-of-truth. I think it’s hard for me to parse something like “agree/disagree” which feels set up to motte-bailey between those.
Okay. I think I understand better now, and especially how this relates to the “trust” you mention elsewhere. In other words, something more like: you think/feel that not locking the definition down far enough will lead to lack of common knowledge on interpretation combined with a more pervasive social need to understand the interpretation to synchronize? Or something like: this will have the same flaws as karma, only people will delude themselves that it doesn’t?
Strange-Loop relevant: this very comment above is one where I went back to “disagree” with myself after Duncan’s reply. What I meant by that is that I originally thought the idea I was stating was likely to be both true and relevant, but now I have changed my mind and think it is not likely to be true, but I don’t think that making the post in the first place was a bad idea with what I knew at the time (and thus I haven’t downvoted myself on the other axis). However, I then remembered that retraction was also an option. I decided to use that too in this case, but I’m not sure that makes full sense here; there’s something about the crossed-out text that gives me a different impression I’m not sure how to unpack right now. Feedback on whether that was a “correct” action or not is welcome.
Disagreement is not necessarily about truth, it’s often about (not) sharing a subjective opinion. In that case resolving it doesn’t make any sense, the things in disagreement can coexist, just as you and the disagreer are different people. The expectation that agreement is (always) about truth is just mistranslation, the meaning is different. Of course falsity/fallaciousness implies disagreement with people who see truth/validity, so it’s some evidence about error if the claims you were making are not subjective (author-referring).
contentless net-disagreement as very hard to interface with
For subjective claims, the alternative to disagreement being comfortable is emotional experience of intolerance, intuitive channeling of conformance-norm-enforcement (whether externally enacted, or self-targeted, or neither).
When the comment is about truth, then agreement/disagreement is automatically about truth. There are comments that are not about truth, being about truth is a special case that shouldn’t be in the general interface, especially if it happens to already be the intended special case of this more general thing I’m pointing at.
I definitely don’t think that “When the comment is about truth, then agreement/disagreement is automatically about truth” is a true statement about humans in general, though it might be aspirationally true of LWers?
One particularly useful thing I think this idea points in the direction of (though I think Duncan would say that this is not enough and does nothing to fix his central problem with the new system) is that the ability to default-hide each axis separately would be a good user-facing option. If a user believes they would be badly influenced by seeing the aggregated approval and/or agreement numbers, they can effectively “spoiler” themselves from the aggregate opinion and either never reveal it or only reveal it after being satisfied with their own thought processes.
You would prefer, if I am understanding you right (I remark explicitly that of course I might not be), a world where the thing people do besides approving/disapproving is separating out specific factual claims and assessing whether they consider those true or false. I think that (1) labelling the buttons agree/disagree will not get you that, (2) there are important cases in which something else, closer to agree/disagree, is more valuable information, (3) reasonable users will typically use agree/disagree in the way you would like them to use true/false except in those cases, and (4) unreasonable users would likely use true/false in the exact same unhelpful ways as they would use agree/disagree.
Taking those somewhat out of order:
On #2: as has been mentioned elsewhere in the thread, for comments that say things like “I think X” or “I like Y” a strict true/false evaluation is answering the question “does the LW readership agree that Duncan thinks X?” whereas an agree/disagree evaluation is answering the question “does the LW readership also think X or like Y?”, and it seems obvious to me that the latter is much more likely to be useful than the former.
On #4: some people don’t think very clearly, or aren’t concerned with fairness, or have a grudge against a particular other user, or are politically mindkilled, or whatever, and I completely agree with you that those people are liable to abuse an agree/disagree button as (in effect) another version of approve/disapprove with extra pretensions. But I would expect those people to do the same with true/false buttons. By definition, they are not trying hard to use the system in a maximally helpful way, attending to subtle distinctions of meaning.
Hence #1: labelling the buttons true/false will not in fact make those people use them the way you would like them to be used.
On #3: Users who are thinking clearly, trying to be fair, etc., will I think typically interpret agree/disagree buttons as asking whether they agree with the factual content of the text in question. There will of course be exceptions, but I think they will mostly be situations like the ones in #2 where pure factual-content-evaluation is (at least in my view) the Wrong Thing.
(Another class of situations where true/false and agree/disagree might diverge: a comment that both asserts facts and makes an argument. Maybe true/false is specifically about the facts and agree/disagree is about the argument too. My expectation would be that when the argument rather than the factual claims is the main point—e.g., because the factual claims are uncontroversial—agree/disagree will be applied to the argument, and otherwise they will be applied to the factual claims. That seems OK to me. You might disagree.)
I think a single vote system baaasically boils down to approve/disapprove already. People do some weighted sum of how true and how useful/productive they find a comment is, and vote accordingly.
I think a single vote already conveys a bunch of information about agreement. Very very few people upvote things they disagree with, even on LW, and most of the time they do, they leave a disambiguating comment (I’ve seen Rob and philh and Daystar do this, for instance).
So making the second vote “agree/disagree” feels like adding a redundant feature; the single vote was already highly correlated with agree/disagree. (Claim.)
What I want, and have bid for every single time (with those bids basically being ignored every time, as far as I can tell) is a distinction between “this was a good contribution” and “I endorse the claims or reasoning therein.”
The thing I would find most useful is the ability to separate things out into “[More like this] and also [endorsed as true],” “[More like this] but [sketchy on truth],” “[Less like this] though [endorsed as true],” and “[Less like this] and [sketchy on truth].”
I think that’s a fascinatingly different breakdown than the usual approve/disapprove that karma represents, and would make LessWrong discussions a more interesting and useful place. (A rough sketch of this four-bucket idea appears after this comment.)
I don’t want these as two separate buttons; I have argued vociferously each time that there should be a single click that gives you two bits.
Given a two-click solution, though, I think that there are better/more interesting questions to pose to the user than like-versus-agree, especially because (as I’ve mentioned each time) I don’t trust the LW userbase to meaningfully distinguish those two. I trust some users to do so most of the time, but that’s worse than nothing when it comes to interpreting e.g. a contextless −5 on one of my posts, which means something very different if it was put there by users I trust than by users I do not trust.
On your #2, the solution I’ve endorsed in a few places is “I could truthfully say this or something close to it from my own beliefs and experience,” which captures both truth and agreement very nicely.
On your #4, this button is no worse than the current implementation.
Basically, I would like us to be setting out to do a useful and reasonable thing in the first place. I don’t think “agree/disagree” is a useful or reasonable thing; I think it is adding a new motte-and-bailey to the site. I think the “I could truthfully say this myself” is useful and reasonable and hits the goods that e.g. Oli wants, while avoiding the cost that I see (that others are reluctant to credit as existing or being important, imo because they are colorblind).
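As a side note, here is a minimal sketch of the four-bucket idea from the comment above, in Python. The `QuadrantVote` enum and `tally` function are hypothetical names introduced only to illustrate that a single click would place a vote in exactly one of four mutually exclusive buckets, rather than moving two independent counters.

```python
# Hypothetical data model for "one click gives you two bits":
# a single vote selects one of four quadrants.
from collections import Counter
from enum import Enum

class QuadrantVote(Enum):
    MORE_LIKE_THIS_ENDORSED = "more like this, and endorsed as true"
    MORE_LIKE_THIS_SKETCHY = "more like this, but sketchy on truth"
    LESS_LIKE_THIS_ENDORSED = "less like this, though endorsed as true"
    LESS_LIKE_THIS_SKETCHY = "less like this, and sketchy on truth"

def tally(votes: list[QuadrantVote]) -> Counter:
    """Report the four buckets separately; no single net number is computed,
    so a contextless aggregate can't blur approval together with agreement."""
    return Counter(votes)
```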
Very very few people upvote things they disagree with, even on LW, and most of the time they do, they leave a disambiguating comment (I’ve seen Rob and philh and Daystar do this, for instance).
I was surprised by this because I don’t remember doing it. After a quick look:
I didn’t find any instances where I said I upvoted something I disagreed with.
But I did find two comments that I upvoted (without saying so) despite disagreeing, because I’d asked what someone thought and they’d answered and I didn’t want to punish that.
I feel like I have more often given “verbal upvotes” for things I disagree with, things like “I’m glad you said this but”, without actually voting? I don’t vote very much for whatever reason.
I think a single vote already conveys a bunch of information about agreement. Very very few people upvote things they disagree with, even on LW, and most of the time they do, they leave a disambiguating comment (I’ve seen Rob and philh and Daystar do this, for instance)...So making the second vote “agree/disagree” feels like adding a redundant feature; the single vote was already highly correlated with agree/disagree. (Claim.)
I am not very knowledgeable about a lot of things people post about on LW, so my median upvote is on a post or comment which is thought-provoking but which I don’t have a strong opinion about. I don’t know if I am typical, but I bet there are at least many people like me.
In a two-factor voting system, what happens if I’m not sure if I agree or disagree, e.g. because I am still thinking about it?
If agree means “I endorse the claims or reasoning and think that more people should believe them to be true”, I would probably default to no (I would endorse only if I’m pretty sure about something, and not endorsing doesn’t mean I think it’s wrong), so it’s more like +1/0 voting. But if agree means “I think this is true”, disagree would then mean saying “I think this is false”, i.e. more like +1/-1 voting, so I would probably abstain?
Pulling together thoughts from a variety of subthreads:
I expect this to meaningfully deter me/create substantial demoralization and bad feelings when I attempt to participate in comment threads, and therefore cause me to do so even less than I currently do.
This impression has been building across all the implementations of the two-factor voting over the past few months.
In particular: the thing I wanted and was excited about from a novel or two-factor voting system was a distinction between what’s overall approved or disapproved (i.e. I like or dislike the addition to the conversation, think it was productive or counterproductive) and what’s true or false (i.e. I endorse the claims or reasoning and think that more people should believe them to be true).
I very much do not believe that “agree or disagree” is a good proxy for that/tracks that. I think that it doesn’t train LWers to distinguish their sense of truth or falsehood from how much their monkey brain wants to signal-boost a given contribution. I don’t think it is going to nudge us toward better discourse and clearer separation of [truth] and [value].
It feels like it’s an active step away from that, and therefore it makes me sad. It’s signal-boosting mob agreement and mob disagreement in a way that feels like more unthinking subjectivity, rather than less.
I think this would not be true if I had faith in the userbase, i.e. in a group composed entirely of Oli, Vaniver, Said, Logan, and Eliezer, I would trust the agreement/disagreement button.
But with LW writ large, I think it’s sort of … halfway pretending to be a signal of truth while secretly just being more-of-the-thing-karma-was-already-doing, i.e. popularity contest made slightly better by the fact that the people judging popularity are trying a little to make actually-good things popular.
(This impression based on scattered assessments of the second vote on various comments over the past few months.)
See my other comment. I don’t think agree/disagree is much different from true/false, and am confused about the strength of your reaction here. I personally don’t have a strong preference, and only mildly prefer “agree/disagree” because it is more clearly in the same category as “approve/disapprove”, i.e. an action, instead of a state.
I think the hover-over text needs tweaking anyways. If other people also have a preference for saying something like “Agree: Do you think the content of this comment is true?” and “Disagree: Do you think the content of this comment is false?”, then that seems good to me. Having “approve/disapprove” and “true/false” as the top-level distinction does sure parse as a type error to me (why is one an action, and the other one an adjective?).
I also think we should definitely change the hover for the karma-vote dimension to say “approve” and “disapprove”, instead of “like” and “dislike”, which I think captures the dimensions here better.
Apart from equivocation of words with usefully different meanings, I think it’s less useful to extract truth-dimension than agreement-dimension, since truth-dimension is present less often, doesn’t help with improving approval-dimension, and agreement-dimension becomes truth-dimension for objective claims, so truth-dimension is a special case of the more-useful-for-other-things agreement-dimension.
I think the karma dimension already captures the-parts-of-the-agreement-dimension-that-aren’t-truth.
I think this is false. Subjective disagreement shouldn’t imply disapproval, capturing subjective-disagreement by disapproval rounds it off to disincentivization of non-conformity, which is a problem. Extracting it into a separate dimension solves this karma-problem.
It is less useful for what you want because it’s contextually-more-ambiguous than the truth-verdict. So I think the meaningful disagreement between me and you/habryka(?) might be in which issue is more important (to spend the second-voting-dimension slot on). I think the large quantity of karma-upvoted/agreement-downvoted comments to this post is some evidence for the importance of the idea I’m professing.
To derive from something I said as a secondary part of another comment, possibly more clearly: I think that extracting “social approval that this post was a good idea and should be promoted” while conflating other forms of “agreement” is a better choice of dimensionality reduction than extracting “objective truth of the statements in this post” while conflating other forms of “approval”. Note that the former makes this change kind of a “reverse extraction” where the karma system was meant to be centered around that one element to begin with and now has some noise removed, while the other elements now have a place to be rather than vanishing. The last part of that may center some disapprovals of the new system, along the lines of “amplifying the rest of it into its own number (rather than leaving it as an ambiguous background presence) introduces more noise than is removed by keeping the social approval axis ‘clean’” (which I don’t believe, but I can partly see why other people might believe).
Of Strange Loop relevance: I am treating most of the above beliefs of mine here as having primarily intersubjective truth value, which is similar in a lot of relevant ways to an objective truth value but only contextually interconvertible.
Hmm, what about language like
“Agree: Do you think the content of this comment is true? (Or if the comment is about an emotional reaction or belief of the author, does that statement resonate with you?)”
It sure is a mouthful, but it feels like it points towards a coherent cluster.
I think the thing Duncan wants is harder to formulate than this, it has to disallow voting on aspects of the comment that are not about factual claims whose truth is relevant. And since most claims are true, it somehow has to avoid everyone-truth-upvotes-everything default in a way that retains some sort of useful signal instead of deciding the number of upvotes based on truth-unrelated selection effects. I don’t see what this should mean for comments-in-general, carefully explained, and I don’t currently have much hope that it can be operationalized into something more useful than agreement.
I am self-aware about the fact that this might just mean “this isn’t your scene, Duncan; you don’t belong” more than “this group is doing something wrong for this group’s goals and values.”
Like, the complaint here is not necessarily “y’all’re doing it Wrong” with a capital W so much as “y’all’re doing it in a way that seems wrong to me, given what I think ‘wrong’ is,” and there might just be genuine disagreement about wrongness.
But I think “agree/disagree” points people toward yet more of the same social junk that we’re trying to bootstrap out of, in a way that “true/false” does not. It feels like that’s where this went wrong/that’s what makes this seem doomed-from-the-start and makes me really emotionally resistant to it.
I do not trust the aggregated agreement or disagreement of LW writ large to help me see more clearly or be a better reasoner, and I do not expect it to identify and signal-boost truth and good argument for e.g. young promising new users trying to become less wrong.
e.g. a −1 just appeared on the top-level comment in the “agree/disagree” category and it makes me want to take my ball and go home and never come back.
I’m taking that feeling as object, rather than being fully subject to it, but when I anticipate fighting against that feeling every time I leave a comment, I conclude “this is a bad place for me to be.”
EDIT: it’s now −3. Is the takeaway “this comment is substantially more false than true”?
EDIT: now at −5, and yes, indeed, it is making me want to LEAVE LESSWRONG.
This means you’re using others’ reactions to define what you are or are not okay with.
I mean, if you think this
−1−3−5 is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?And if you think it’s not reflecting something true, doesn’t your reaction highlight a place where your reactions need calibrating?
I’m pretty sure you’re actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.
But this is a place where a collective culture of emotional codependence actively screws with epistemics.
Which is to say, I disagree in a principled way with your sense of “wrongness” here, in the sense you name in your previous comment:
I think a good truth-tracking culture acknowledges, but doesn’t try to ameliorate, the discomfort you’re naming in the comment I’m replying to.
(Whether LW agrees with me here is another matter entirely! This is just me.)
No, not quite.
There’s a difference (for instance) between knowledge and common knowledge, and there’s a difference (for instance) between animosity and punching.
Or maybe this is what you meant with “actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.”
A bunch of LWers can be individually and independently wrong about matters of fact, and this is different from them creating common knowledge that they all disagree with a thing (wrongly).
It’s better in an important sense for ten individually wrong people to each not have common knowledge that the other nine also are wrong about this thing, because otherwise they come together and form the anti-vax movement.
Similarly, a bunch of LWers can be individually in grumbly disagreement with me, and this is different from there being a flag for the grumbly discontent to come together and form SneerClub.
(It’s worth noting here that there is a mirror to all of this, i.e. there’s the world in which people are quietly right or in which their quiet discontent is, like, a Correct Moral Objection or something. But it is an explicit part of my thesis here that I do not trust LWers en-masse. I think the actual consensus of LWers is usually hideously misguided, and that a lot of LW’s structure (e.g. weighted voting) helps to correct and ameliorate this fact, though not perfectly (e.g. Ben Hoffman’s patently-false slander of me being in positive vote territory for over a week with no one speaking in objection to it, which is a feature of Old LessWrong A Long Time Ago but it nevertheless still looms large in my model because I think New LessWrong Today is more like the post-Civil-War South (i.e. not all that changed) than like post-WWII-Japan (i.e. deeply restructured)).)
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
It is up in the air whether RightnessAndWrongnessAccordingToDuncan is closer to actually right than RightnessAndWrongnessAccordingToTheLWMob.
But it seems to me that the vote button in its current implementation, and evaluated according to the votes coming in, was more likely to be in the non-overlap between those two, and in the LWMob part, which means an asymmetric weapon in the wrong direction.
Sorry, this comment is sort of quickly tossed off; please let me know if it doesn’t make sense.
Mmm. It makes sense. It was a nuance I missed about your intent. Thank you.
Abstractly that seems maybe good.
My gut sense is you can’t do that by targeting how coalitions form. That engenders Goodhart drift. You’ve got to do it by making truth easier to notice in some asymmetric way.
I don’t know how to do that.
I agree that this voting system doesn’t address your concern.
It’s unclear to me how big a problem it is though. Maybe it’s huge. I don’t know.
I think other people are saying “the sentences that Duncan says about himself are not true for me” while also saying “I am nevertheless glad that Duncan said it”. This seems like great information for me, and is like, quite important for me getting information from this thread about how people want us to change the feature.
And if you change agreement-dimension to truth-dimension, this data will no longer be possible to express in terms of voting, because it’s not the case that Duncan-opinion is false.
The distinction between “not true for me, the reader” and “not true at all” is not clear.
And that is the distinction between “agree/disagree” and “true/false.”
Hmm, I do sure find the first one more helpful when people talk about themselves. Like, if someone says “I think X”, I want to know when other people would say “I think not X”. I don’t want people to tell me if they really think whether the OP accurately reported on their own beliefs and really believes X.
Yeah. Both are useful, and each is more useful in some context or other. I just want it to be relatively unambiguous which is happening—I really felt like I was being told I was wrong in my top-level comment. That was the emotional valence.
I’m sad this is your experience!
I interpret “agree/disagree” in this context as literally ‘is this comment true, as far as you can tell, or is it false?’, so when I imagine changing it to “true/false” I don’t imagine it feeling any different to me. (Which also means I’m not personally opposed to such a change. 🤷)
Maybe relevant that I’m used to Arbital’s ‘assign a probability to this claim’ feature. I just tihnk of this as a more coarse-grained, fast version of Arbital’s tool for assigning probabilities to claims.
When I see disagree-votes on my comments, I think I typically feel bad about it if it’s also downvoted (often some flavor of ‘nooo you’re not fully understanding a thing I was trying to communicate!’), but happy about it if it’s upvoted. Something like:
Amusement at the upvote/agreevote disparity, and warm feelings toward LW that it was able to mentally separate its approval for the comment from how much probability it assigns to the comment being true.
Pride in LW for being one of the rare places on the Internet that cares about the distinction between ‘I like this’ and ‘I think this is true’.
I mostly don’t perceive the disagreevotes as ‘you are flatly telling me to my face that I’m wrong’. Rather, I perceive it more like ‘these people are writing journal entries to themselves saying “Dear Diary, my current belief is X”’, and then LW kindly records a bunch of these diary entries in a single centralized location so we can get cool polling data about where people are at. It feels to me like a side-note.
Possibly I was primed to interpret things this way by Arbital? On Arbital, probability assignments get their own page you can click through to; and Arbital pages are timeless, so people often go visit a page and vote on years after the post was originally created, with (I think?) no expectation the post author will ever see that they voted. And their names and specific probabilities are attached. All of which creates a sense of ‘this isn’t a response to the post, it’s just a tool for people to keep track of what they think of things’
Maybe that’s the crux? I might be totally wrong, but I imagine you seeing a disagree-vote and reading it instead as ‘a normal downvote, but with a false pretension of being somehow unusually Epistemic and Virtuous’, or as an attempt to manipulate social reality and say ‘we, the elite members of the Community of Rationality, heirs to the throne of LessWrong, hereby decree (from behind our veil of anonymity) (with zero accountability or argumentation) that your view is False; thus do we bindingly affirm the Consensus Position of this site’.
I think I can also better understand your perspective (though again, correct me if I’m wrong) if I imagine I’m in hostile territory surrounded by enemies.
Like, maybe you imagine five people stalking you around LW downvoting-and-disagreevoting on everything your post, unfairly strawmanning you, etc.; and then there’s a separate population of LWers who are more fair-minded and slower to rush to judgment.
But if the latter group of people tends to upvote you and humbly abstain from (dis)agreevoting, then the pattern we’ll often see is ‘you’re being upvoted and disagreed with’, as though the latter fair-minded population were doing both the upvoting and the disagreevoting. (Or as though the site as a whole were virtuously doing the support-and-defend-people’s-right-to-say-unpopular-things thing.) Which is in fact wildly different from a world where the fair-minded people are neutral or positive about the truth-value of your comments, while the Duncan-hounding trolls.
And even if the probability of the ‘Duncan-hounding trolls’ thing is low, it’s maddening to have so much uncertainty about which of those scenarios (or other scenarios) is occurring. And it’s doubly maddening to have to worry that third parties might assign unduly low probability to the ‘Duncan-hounding trolls’ thing, and to related scenarios. And that they might prematurely discount Duncan’s view, or be inclined to strawman it, after seeing a −8 or whatever that tells them ‘social reality is that this comment is Wrong’.
Again, tell me if this is all totally off-base. This is me story-telling so you can correct my models; I don’t have a crystal ball. But that’s an example of a scenario where I’d feel way more anxious about the new system, and where I’d feel very happy to have a way of telling how many people are agreeing and upvoting, versus agreeing and downvoting, versus disagreeing and upvoting, versus disagreeing and downvoting.
Plausibly a big part of why we feel differently about the system is that you’ve had lots of negative experiences on LW and don’t trust the consensus here, while I feel more OK about it?
Like, I don’t think LW is reliably correct, and I don’t think of ‘people who use LW’ as the great-at-epistemics core of the rationalist community. But I feel fine about the site, and able to advocate my views, be heard, persuade people, etc. If your experience is instead one of constantly having to struggle to be understood at all, fighting your way to not be strawmanned, having a minority position that’s constantly under siege, etc., then I could imagine having a totally different experience.
It is not totally off-base; these hypotheses above plus my reply to Val pretty much cover the reaction.
… resonated pretty strongly.
Yes.
Yes. In particular, I feel I have been, not just misunderstood, but something-like attacked or willfully misinterpreted, many times, and usually I am wanting someone, anyone, to come to my defense, and I only get that defense perhaps one such time in three.
Worth noting that I was on board with the def of approve/disapprove being “I could truthfully say this or something close to it from my own beliefs and experience.”
It seems to me (and really, this doubles as a general comment on the pre-existing upvote/downvote system, and almost all variants of the UI for this one, etc.) that… a big part of the problem with a system like this, is that… “what people take to be the meaning of a vote (of any kind and in any direction)” is not something that you (as the hypothetical system’s designer) can control, or determine, or hold stable, or predict, etc.
Indeed it’s not only possible, but likely, that:
different people will interpret votes differently;
people who cast the votes will interpret them differently from people who use the votes as readers;
there will be difficult-to-predict patterns in which people interpret votes how;
how people interpret votes, and what patterns there are in this, will drift over time;
how people think about the meaning of the votes (when explicitly thinking about them) differs from how people’s usage of the votes (from either end) maps to their cognitive and affective states (i.e., people think they think about votes one way, but they actually think about votes another way);
… etc., etc.
So, to be frank, I think that any such voting system is doomed to be useless for measuring anything more subtle or nuanced than the barest emotivism (“boo”/“yay”), simply because it’s not possible to consistently and with predictable consequences dictate an interpretation for the votes, to be reliably and stably adhered to by all users of the site.
If true, that would imply an even higher potential value of meta-filtering (users can choose which other users’ feedback modulates their experience).
I don’t think this follows… after all, once you’re whitelisting a relatively small set of users you want to hear from, why not just get those users’ comments, and skip the voting?
(And if you’re talking about a large set of “preferred respondents”, then… I’m not sure how this could be managed, in a practical sense?)
That’s why it’s a hard problem. The idea would be to get leverage by letting you say “I trust this user’s judgement, including about whose judgement to trust”. Then you use something like (personalized) PageRank / eigenmorality https://scottaaronson.blog/?p=1820 to get useful information despite the circularity of “trusting who to trust about who to trust about …”, in a way that leverages all the users’ ratings of trust.
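For concreteness, here’s a minimal sketch of the sort of computation this is gesturing at, under my own assumptions: the function name, the dense trust-matrix representation, and the damping/iteration parameters are all illustrative, and none of this is anything LessWrong actually implements.

```python
import numpy as np

def personalized_pagerank(trust, source, damping=0.85, iters=100):
    """Toy personalized trust scores in the PageRank / eigenmorality style.

    trust[i][j] > 0 means user i trusts user j's judgement (including their
    judgement about whom to trust). Returns one score per user, seen from
    the point of view of `source`.
    """
    trust = np.asarray(trust, dtype=float)
    n = trust.shape[0]

    # Random-walk-with-restart: the walk keeps jumping back to `source`,
    # which is what makes the scores personalized rather than global.
    restart = np.zeros(n)
    restart[source] = 1.0

    # Row-normalize outgoing trust; a user who rated nobody sends the walk
    # back to the restart distribution (a standard dangling-node treatment).
    transition = np.tile(restart, (n, 1))
    for i in range(n):
        total = trust[i].sum()
        if total > 0:
            transition[i] = trust[i] / total

    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) * restart + damping * (scores @ transition)
    return scores

# Example: user 0 trusts user 1, who trusts user 2; user 0 never rated
# user 2, but inherits some trust in them through user 1.
trust = [[0, 1, 0],
         [0, 0, 1],
         [0, 0, 0]]
print(personalized_pagerank(trust, source=0))
```

In this toy case, user 0 ends up assigning nonzero trust to user 2 purely through user 1, which is the kind of leverage being described despite the circularity.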
I agree, but I find something valuable about, like, unambiguous labels anyway?
Like it’s easier for me to metabolize “fine, these people are using the button ‘wrong’ according to the explicit request made by the site” somehow, than it is to metabolize the confusingly ambiguous open-ended “agree/disagree” which, from comments all throughout this post, clearly means like six different clusters of Thing.
Did you mean “confusingly ambiguous”? If not, then could you explain that bit?
I did mean confusingly ambiguous, which is an ironic typo. Thanks.
I think we should be in the business of not setting up brand-new motte-and-baileys, and enshrining them in site architecture.
Yes, I certainly agree with this.
(I do wonder whether the lack of agreement on the unwisdom of setting up new motte-and-baileys comes from the lack of agreement that the existing things are also motte-and-baileys… or something like them, anyway—is there even an “official” meaning of the karma vote buttons? Probably there is, but it’s not well-known enough to even be a “motte”, it seems to me… well, anyhow, as I said—maybe some folks think that the vote buttons are good and work well and convey useful info, and accordingly they also think that the agreement vote buttons will do likewise?)
I think an expansion of that subproblem is that “agreement” is determined in different modalities depending on the context of the comment. Having only one axis for it means the context can be chosen implicitly, which (to my mind) sort of happens anyway. Modes of agreement include truth in the objective sense but also observational (we see the same thing, not quite the same as the model-belief that generates it), emotional (we feel the same response), axiological (we think the same actions are good), and salience-based (we both think this model is relevant—this is one of the cases where fuzziness versus the approval axis might come most into play). In my experience it seems reasonably clear for most comments which axis is “primary” (and I would just avoid indicating/interpreting on the “agreement” axis in case of ambiguity), but maybe that’s an illusion? And separating all of those out would be a much more radical departure from a single-axis karma system, and impose even more complexity (and maybe rigidity?), but it might be worth considering what other ideas are around that.
More narrowly, I think having only the “objective truth” axis as the other axis might be good in some domains but fails badly in a more tangled conversation, and especially fails badly while partial models and observations are being thrown around, and that’s an important part of group rationality in practice.
If the labels were “true/false”, wouldn’t it still be unclear when people meant “not true for me, the reader” and when they meant “not true at all”?
I’ve gone into this in more detail elsewhere. Ultimately, the solution I like best is “Upvoting on this axis means ‘I could truthfully say this or something close to it from my own beliefs and experience.’”
I think I experience silent, contentless net-disagreement as very hard to interface with. It doesn’t specify what’s wrong with my comment, it doesn’t tell me what the disagreer’s crux is, it doesn’t give me any handholds or ways-to-resolve-the-disagreement. It’s just a “you’ve-been-kicked” sign sitting on my comment forever.
Whereas “the consensus of LW users asked to evaluate this comment for truth is that it is more false than true” is at least conveying something interesting. It can tell me to, for instance, go add more sources and argument in defense of my claims.
Yeah, I think this is a problem, but I think contentless net-disapproval is substantially worse than that (at least for me, I can imagine it being worse for some people, but overall expect people to strongly prefer contentless net-disagreement to contentless net-disapproval).
Like, I think one outcome of this voting system change is that some contentless net-disapproval gets transformed into contentless net-disagreement, which I think has a substantially better effect on the discourse (especially if combined with high approval, which I think carves out a real place for people who say lots of stuff that others disagree with, which I think is good).
(I added a small edit after the fact that you may not have seen.)
Ah, indeed. Seems like it’s related to a broader mismatch on agree/disagree vs. true/false that we are discussing in other threads.
(Preamble: I am sort of hesitant to go too far in this subthread for fear of pushing your apparent strong reaction further. Would it be appropriate to cool down for a while elsewhere before coming back to this? I hope that’s not too intrusive to say, and I hope my attempt below to figure out what’s happening isn’t too intrusively psychoanalytical.)
I would like to gently suggest that the mental motion of not treating disagreement (even when it’s quite vague) as “being kicked”—and learning to do some combination of regulating that feeling and not forming the association to begin with—forms, at least for me, a central part of the practical reason for distinguishing discursive quality from truth in the first place. By contrast, a downvote in the approval sense is meant (though of course that doesn’t mean it will consistently be treated this way!) to be the social-nudge side—the negative-reinforcement “it would have been better if you hadn’t posted that” side.
I was initially confused as well as to how the four-pointed star version you suggested elsewhere would handle this, but combining the two, I think I see a possibility, now. Would it be accurate to say that you have difficulty processing what feels like negative reinforcement on one axis when it is not specifically coupled with either confirmatory negative or relieving positive reinforcement on the other, and that your confusion around the two-axis system involves a certain amount of reflexive “when I see a negative on one axis, I feel compelled to figure out which direction it means on the other axis to determine whether I should feel bad”? Because if so, that makes me wonder how many people do that by default.
I think it’s easy for me to parse approval/disapproval, and it’s easy for me to parse assertions-of-falsehood/assertions-of-truth. I think it’s hard for me to parse something like “agree/disagree” which feels set up to motte-bailey between those.
Okay. I think I understand better now, and especially how this relates to the “trust” you mention elsewhere. In other words, something more like: you think/feel that not locking the definition down far enough will lead to lack of common knowledge on interpretation combined with a more pervasive social need to understand the interpretation to synchronize? Or something like: this will have the same flaws as karma, only people will delude themselves that it doesn’t?
Yes to both of your summaries, roughly.
Strange-Loop relevant: this very comment above is one where I went back to “disagree” with myself after Duncan’s reply. What I meant by that is that I originally thought the idea I was stating was likely to be both true and relevant, but now I have changed my mind and think it is not likely to be true, but I don’t think that making the post in the first place was a bad idea with what I knew at the time (and thus I haven’t downvoted myself on the other axis). However, I then remembered that retraction was also an option. I decided to use that too in this case, but I’m not sure that makes full sense here; there’s something about the crossed-out text that gives me a different impression I’m not sure how to unpack right now. Feedback on whether that was a “correct” action or not is welcome.
Disagreement is not necessarily about truth; it’s often about (not) sharing a subjective opinion. In that case resolving it doesn’t make any sense: the things in disagreement can coexist, just as you and the disagreer are different people. The expectation that agreement is (always) about truth is just mistranslation; the meaning is different. Of course falsity/fallaciousness implies disagreement with people who see truth/validity, so it’s some evidence about error if the claims you were making are not subjective (author-referring).
For subjective claims, the alternative to disagreement being comfortable is the emotional experience of intolerance, the intuitive channeling of conformance-norm-enforcement (whether externally enacted, or self-targeted, or neither).
Right. I’m advocating that we do have a symbol for agreement/disagreement about truth, and leave the subjective stuff in the karma score.
When the comment is about truth, then agreement/disagreement is automatically about truth. There are comments that are not about truth; being about truth is a special case that shouldn’t be in the general interface, especially if it happens to already be the intended special case of this more general thing I’m pointing at.
I definitely don’t think that “When the comment is about truth, then agreement/disagreement is automatically about truth” is a true statement about humans in general, though it might be aspirationally true of LWers?
theyhatedhimbecausehetoldthemthetruth.meme
One particularly useful thing I think this idea points in the direction of (though I think Duncan would say that this is not enough and does nothing to fix his central problem with the new system) is that the ability to default-hide each axis separately would be a good user-facing option. If a user believes they would be badly influenced by seeing the aggregated approval and/or agreement numbers, they can effectively “spoiler” themselves from the aggregate opinion and either never reveal it or only reveal it after being satisfied with their own thought processes.
You would prefer, if I am understanding you right (I remark explicitly that of course I might not be), a world where the thing people do besides approving/disapproving is separating out specific factual claims and assessing whether they consider those true or false. I think that (1) labelling the buttons agree/disagree will not get you that, (2) there are important cases in which something else, closer to agree/disagree, is more valuable information, (3) reasonable users will typically use agree/disagree in the way you would like them to use true/false except in those cases, and (4) unreasonable users would likely use true/false in the exact same unhelpful ways as they would use agree/disagree.
Taking those somewhat out of order:
On #2: as has been mentioned elsewhere in the thread, for comments that say things like “I think X” or “I like Y” a strict true/false evaluation is answering the question “does the LW readership agree that Duncan thinks X?” whereas an agree/disagree evaluation is answering the question “does the LW readership also think X or like Y?”, and it seems obvious to me that the latter is much more likely to be useful than the former.
On #4: some people don’t think very clearly, or aren’t concerned with fairness, or have a grudge against a particular other user, or are politically mindkilled, or whatever, and I completely agree with you that those people are liable to abuse an agree/disagree button as (in effect) another version of approve/disapprove with extra pretensions. But I would expect those people to do the same with true/false buttons. By definition, they are not trying hard to use the system in a maximally helpful way, attending to subtle distinctions of meaning.
Hence #1: labelling the buttons true/false will not in fact make those people use them the way you would like them to be used.
On #3: Users who are thinking clearly, trying to be fair, etc., will I think typically interpret agree/disagree buttons as asking whether they agree with the factual content of the text in question. There will of course be exceptions, but I think they will mostly be situations like the ones in #2 where pure factual-content-evaluation is (at least in my view) the Wrong Thing.
(Another class of situations where true/false and agree/disagree might diverge: a comment that both asserts facts and makes an argument. Maybe true/false is specifically about the facts and agree/disagree is about the argument too. My expectation would be that when the argument rather than the factual claims is the main point—e.g., because the factual claims are uncontroversial—agree/disagree will be applied to the argument, and otherwise they will be applied to the factual claims. That seems OK to me. You might disagree.)
I think a single vote system baaasically boils down to approve/disapprove already. People do some weighted sum of how true and how useful/productive they find a comment, and vote accordingly.
I think a single vote already conveys a bunch of information about agreement. Very very few people upvote things they disagree with, even on LW, and most of the time they do, they leave a disambiguating comment (I’ve seen Rob and philh and Daystar do this, for instance).
So making the second vote “agree/disagree” feels like adding a redundant feature; the single vote was already highly correlated with agree/disagree. (Claim.)
What I want, and have bid for every single time (with those bids basically being ignored every time, as far as I can tell) is a distinction between “this was a good contribution” and “I endorse the claims or reasoning therein.”
The thing I would find most useful is the ability to separate things out into “[More like this] and also [endorsed as true],” “[More like this] but [sketchy on truth],” “[Less like this] though [endorsed as true],” and “[Less like this] and [sketchy on truth].”
I think that’s a fascinatingly different breakdown than the usual approve/disapprove that karma represents, and would make LessWrong discussions a more interesting and useful place.
I don’t want these as two separate buttons; I have argued vociferously each time that there should be a single click that gives you two bits.
Given a two-click solution, though, I think that there are better/more interesting questions to pose to the user than like-versus-agree, especially because (as I’ve mentioned each time) I don’t trust the LW userbase to meaningfully distinguish those two. I trust some users to do so most of the time, but that’s worse than nothing when it comes to interpreting e.g. a contextless −5 on one of my posts, which means something very different if it was put there by users I trust than by users I do not trust.
On your #2, the solution I’ve endorsed in a few places is “I could truthfully say this or something close to it from my own beliefs and experience,” which captures both truth and agreement very nicely.
On your #4, this button is no worse than the current implementation.
Basically, I would like us to be setting out to do a useful and reasonable thing in the first place. I don’t think “agree/disagree” is a useful or reasonable thing; I think it is adding a new motte-and-bailey to the site. I think the “I could truthfully say this myself” is useful and reasonable and hits the goods that e.g. Oli wants, while avoiding the cost that I see (that others are reluctant to credit as existing or being important, imo because they are colorblind).
I was surprised by this because I don’t remember doing it. After a quick look:
I didn’t find any instances where I said I upvoted something I disagreed with.
But I did find two comments that I upvoted (without saying so) despite disagreeing, because I’d asked what someone thought and they’d answered and I didn’t want to punish that.
I feel like I have more often given “verbal upvotes” for things I disagree with, things like “I’m glad you said this but”, without actually voting? I don’t vote very much for whatever reason.
I must’ve swapped in a memory of some other LWer I’ve been repeatedly grateful for at various points.
<3
I am not very knowledgeable about a lot of things people post about on LW, so my median upvote is on a post or comment which is thought-provoking but which I don’t have a strong opinion about. I don’t know if I am typical, but I bet there are at least many people like me.
In a two-factor voting system, what happens if I’m not sure if I agree or disagree, e.g. because I am still thinking about it?
If agree means “I endorse the claims or reasoning and think that more people should believe them to be true”, I would probably default to no (I would endorse only if I’m pretty sure about something, and not endorsing doesn’t mean I think it’s wrong), so it’s more like +1/0 voting. But if agree means “I think this is true”, disagree would then mean saying “I think this is false”, i.e. more like +1/-1 voting, so I would probably abstain?
Yeah, I think if you’re torn you just don’t vote yet.