Strong Votes [Update: Deployed]
Update: This has now been pushed to the live site. For now, on Desktop, strong upvotes require click-and-hold. Mobile users just tap multiple times. This is most likely a temporary solution as we get some feedback about how the respective modes work under realistic conditions.
Over on our dev-site, lessestwrong.com, we have a fairly major feature-branch: Recalibrated voting power, and Strong Upvotes/Downvotes.
tl;dr – Normal votes on the test server range in power from 1 to 3 (depending on your karma). You have the option of holding down the up/downvote button for a strong vote, which ranges in power from 1 to 15.
We’re looking for feedback on the UI, and some implementation details.
Flow vs Stock
This post by Jameson Quinn notes that there are two common reasons to upvote or downvote things. This is similar to my own schema:
Conversational Flow – When you like (or dislike) the effect a comment has on a conversation, and you want to give the author a metaphorical smile of appreciation, or awkward silence/stare.
“Ah, good point” (+)
“Hmm, this gives me something to think about” (+)
“This comment cited sources, which is rare. I want to reward that.” (+)
“This was clever/funny.” (+)
“I think this post contains an error.” (–)
“This comment is technically fine but annoying to read” (–)
“I don’t think the author is being very charitable here” (–)
Some combination of the above (upvote or downvote, depending)
Signifying Importance – When you think other people should go out of their way to read something (or, definitely should not). Ideally, posts and comments that contribute to the longterm stock of value that LessWrong is accumulating.
“I learned something new and useful” (++)
“The argumentation or thought process illustrated by this post helped me learn to think better.” (++)
“This post contains many factual errors” (––)
“This comment is literal spam” (––)
“The reasoning here is deeply bad.” (––)
People instinctively use upvoting to cover both Flow and Importance, and this often results in people upvoting things because they were a good thing to say in a conversation. But then later, if you want to find the most useful comments in a discussion, you end up sifting through a bunch of not-actually-useful stuff.
People also often unreflectively upvote things they like, without paying much attention to whether the arguments are good, or whether it’s good for the longterm health of the site. This means people who think hard about their upvotes get counted just as much as people casually clicking.
So the idea here is that by default, clicking results in a Normal Upvote. But, if you hold the button down for a couple seconds, you’ll get a Strong Upvote. (And same for downvotes).
Can you technically Strong Upvote everything? Well, we can’t stop you. But we’re hoping a combination of mostly-good-faith + trivial inconveniences will result in people using Strong Upvotes when they feel it’s actually important.
I have some more thoughts on “what good effects precisely are we aiming for here”, which I’ll flesh out in the comments and/or the final blogpost when we actually deploy this change to production.
Vote-Power by Karma
Quick overview of the actual numbers (vote power – minimum karma required):
Normal votes
2 – 1,000 karma
1 – 0 karma
Strong Votes
16 – 500,000 (i.e. Thousand year old vampire—the level above Eliezer)
15 – 250,000
14 – 175,000
13 – 100,000
12 – 75,000
11 – 50,000
10 – 25,000
9 – 10,000
8 – 5,000
7 – 2,500
6 – 1,000
5 – 500
4 – 250
3 – 100
2 – 10
1 – 0
(We considered using another log scale, but log5 didn’t quite give us the granularity we wanted, and smaller log scales produced weird numbers that just didn’t really correspond to the effect we wanted. So we just picked some numbers that felt right.)
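For reference, the table above collapses into a simple threshold lookup. This is just a sketch of the mapping as described in this post; the function names are ours, not the actual codebase's:

```python
# Karma thresholds for strong-vote power, taken from the table above.
# Each entry is (minimum karma required, strong-vote power).
STRONG_VOTE_TIERS = [
    (500_000, 16), (250_000, 15), (175_000, 14), (100_000, 13),
    (75_000, 12), (50_000, 11), (25_000, 10), (10_000, 9),
    (5_000, 8), (2_500, 7), (1_000, 6), (500, 5),
    (250, 4), (100, 3), (10, 2), (0, 1),
]

def strong_vote_power(karma: int) -> int:
    """Return the strong-vote power for a user with the given karma."""
    for threshold, power in STRONG_VOTE_TIERS:
        if karma >= threshold:
            return power
    return 1

def normal_vote_power(karma: int) -> int:
    """Per the table above, normal votes are worth 2 at 1,000+ karma, else 1."""
    return 2 if karma >= 1_000 else 1
```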
Feedback
We’re still hashing out the exact UI here – in particular, the UI for helping users discover the feature. (Posts basically have little-to-no discoverability; comments have a little hover-over message.)
Check out lessestwrong.com and note your feedback here. (If you created your user account recently, you may need to create an alternate account on the lessestwrong development database)
So here’s a thing I’m a bit uneasy about with this, after living with it for a while: since every comment is, by default, strong-upvoted by its poster but only weak-{up/down}voted by others, and since most comments don’t attract huge numbers of votes, it seems as if a typical comment’s score says much more about who posted it than about what anyone else thought of it, and I’m not sure that’s how we want it.
Is it feasible to (1) make comments be only weak-upvoted by their posters, and (2) remove your ability to vote on your own comments?
(I don’t know whether that’s the right solution. Suppose A posts something and B really dislikes it and strong-downvotes it, and no one else cares much. Then the comment’s score will be negative even if A has much more karma than B, which may not be what we want.)
Note for future people: this suggestion was eventually implemented.
Well, (1) was implemented, (2) wasn’t.
Curse me for not reading carefully and then getting halfway through the comment and thinking “yes! Check! Did it!” literally in the middle of a 2-part clause that literally spelled out its 2-part-ness.
Did you end up implementing part two?
No, and we don’t currently plan to. (It’s plausibly more correct to implement for agree-voting, but I currently think it’s fine for users to occasionally strong-upvote their own comments when they think a particular comment is particularly important. I haven’t seen anything making me think there are currently a lot of adversarial self-upvotes.)
We might change our mind on this, but that’s my current take.
I disagree here. Every time I’ve seen someone strong-self-vote their comment, I’ve felt it was pretty bad. It’s rare, and not obviously worth the effort to fix, but I wouldn’t agree with Ray’s take.
on my shortform i used self-strong-upvote to sort the videos list in a way that other users could vote on
Update: This has now been pushed to the live site. For now, on Desktop, strong upvotes require click-and-hold. Mobile users just tap multiple times. This is most likely a temporary solution as we get some feedback about how the respective modes work under realistic conditions.
It’s likely there’ll be a few kinks to work out. Let us know if issues come up.
I use LW on both desktop and mobile and sometimes get confused between the gestures, so I think adding double-click as an option on desktop would make it slightly easier to use.
Hmm, I am worried that tons of people have now built desktop-specific habits on how they deal with voting that this would upset quite a bit (probably including me). But I will think about it.
It also occurs to me that it makes strong-upvoting too easy for my taste, nearly as easy as regular voting.
That was originally the reason we made it a long-click. If it had been easier to set up long-clicks on mobile we’d probably have just used it there too. (Unfortunately, long-click has conflicting purposes on mobile.)
We got to discussing this on #lesswrong recently. I don’t see anyone here pointing this out yet directly, so:
This approach, hoping that good faith will prevent people from using Strong votes “too much”, is a good example of an Asshole Filter (linkposted on LW last year). You’ve set some (unclear) boundaries, then due to not enforcing them, reward those who violate them with increased control over the site conversation. Chris_Leong gestures towards this without directly naming it in a sibling comment.
In my opinion “maybe put limits on strong upvotes if this seems to be a problem” is not the correct response to this problem, nor would be banning or otherwise ‘disciplining’ users who use strong votes “too much”. The correct response is to remove the asshole filter by altering the incentives to match what you want to happen. Options include:
Making votes normal by default but encouraging users to use strong votes freely, up to 100% of the time, so that good faith users are not disadvantaged. (Note: still disenfranchises users who don’t notice that this feature exists, but maybe that’s ok.)
Making votes strong by default so that it’s making a “weak” vote that takes extra effort. (Note: this gives users who carefully make weak votes when they have weak opinions less weight, but at least they do this with eyes open and in the absence of perverse incentives.)
#2 but with some algorithmic adjustment to give careful users more weight instead of less. This seems extremely difficult to get right (cf. slashdot metamoderation). Probably the correct answer there is some form of collaborative filtering.
Personally I favour solution #1.
I’ll add that this is not just a hypothetical troll-control issue. This is also a UX issue. Forcing users to navigate an unclear ethical question and prisoner’s dilemma—how much strong voting is “too much”—in order to use the site is unpleasant and a bad user experience. There should not be a “wrong” action available in the user interface.
PS. I’ll concede that making strong votes an actually limited resource that is enforced by the site economically (eg. with Token Bucket quota) would in a way also work, due to eliminating the perceived need for strong votes to be limited by “good faith”. But IMO the need is only perceived, and not real. Voting is for expressing preferences, and preferences are unlimited.
Note: I would never punish anyone for their vote-actions on the site, both because I agree that you should not give people options and then punish them for using those options without communicating any downside, and more importantly because I think it is really important that votes form an independent assessment for which people do not feel like they have to justify themselves. Any punishment of voting would include some kind of public discussion of vote-patterns, which is definitely off-limits for us, and something we are very very very hesitant to do. (This seemed important to say, since I think independence of voting is quite important for site integrity.)
It is not difficult to make people notice the feature exists; cf. the GreaterWrong implementation. (Some people will, of course, still fail to notice it, somehow. There are limits to how much obliviousness can be countered via reasonable UX design decisions.)
[emphasis mine]
This is a good point, but a subtle and easily-mistakable one.
There is a misinterpretation of the bolded claim, which goes like this:
The UI should not permit an action which the user would not want to take.
The response to this, of course, is that the designers of the UI do not necessarily know in advance what actions the user does or does not want to take. Therefore let the UI permit all manner of actions; let the user decide what he wishes to do.
But that is not what (I am fairly sure) nshepperd meant. Rather, the right interpretation is:
The UI should not permit an action which the user, having taken, will (predictably) be informed was a wrong action.
In other words, if it’s known, by the system, that a certain action should not be taken by the user, then make it so that action cannot be taken! If you know the action is wrong, don’t wait until after the user does it to inform him of this! Say, in advance: “No, you may not do this.”
And with this view I entirely agree.
It is my understanding that some or all of the LW team (as well as, possibly, others?) do not take this view. As I understand it, the contrary view is that the purpose of voting is to adjust the karma that a post/comment ends up with to some perceived “proper” value, rather than to express an independent opinion of it. The former may involve voting up, or down, strongly or weakly… I confess that I find this view perplexing, myself, so I will let its proponents defend it further, if they wish.
I don’t think it’s super productive to go into this with a ton of depth, but I do also think that voting is for expressing preferences, just that it’s better to model the preference as “on a scale from 1 to 1000, how good is this post?” instead of “is this post good or bad?”. And you implement the former by upvoting if the current karma is below your assessment, and downvoting if it is above, with the strong version being used when it’s particularly far from your assessment. This gives you access to a bunch more data than if everyone just votes independently (i.e. voting independently results in a post that is just above the “good enough to strong-upvote” threshold for a lot of users getting the same karma as a post that is in the top 5 all-time favorites of everyone who upvoted it).
In either case I am interested in an independent assessment, just that the assessment moves from “binary good/bad” to “numerical ordering of preferences”.
The problem with this view is that there does not seem to be any way to calibrate the scale. What should be the karma of a good post? A bad post? A mediocre one? What does 20 mean? What does 5 mean? Don’t the answers to these questions depend on how many users are voting on the post, and what their voting behavior is? Suppose you and I both hold the view you describe, but I think a good post should have 100 karma and you think a good post should have 300 karma—how should our voting behavior be interpreted? What does it mean, when a post ends up with, say, 75 karma? Do people think it’s good? Bad? Do we know?
This gets very complicated. It seems like the signal is degraded, not improved, by this.
It seems to me like your perspective results in an improved signal only if everyone who votes has the same opinions on everything.
If people do not have the same opinions, then there will be a distribution across people’s “good enough to strong-upvote” thresholds; a post’s karma will then reflect its position along that distribution. A “top 5 all-time favorite for many people” will be “good enough to strong-upvote” for most people, and will have a high score. A “just good enough to upvote” post for many people, will cross that threshold for fewer, i.e. will be lower along that distribution, and will end up with a lower score. (In other words, you’re getting strong upvote × probability of strong upvote, summed across all voters.)
If everyone has the same opinion, then this will simply result in either everyone strong-upvoting it or no one strong-upvoting it—and in that case, my earlier concern about differently calibrated scales also does not apply.
So, your interpretation seems optimal if adopted by a user population with extremely homogeneous opinions. It is strongly sub-optimal, however, if adopted by a user population with a diverse range of opinions; in that scenario, the “votes independently indicate one’s own evaluation” interpretation is optimal.
Your interpretation of the bolded part is correct.
Overall, agree on the whole asshole filter thing. After a few months of operation, we now have a bunch more data on how people vote, and so might make some adjustments to the system after we analyzed the data a bunch more.
I am currently tending towards a system where your strong-upvotes get weaker the more often you use them, using some kind of “exhaustion” mechanic. I think this still would cause a small amount of overrepresentation by people who use it a lot, but I think would lessen the strength of the effect. I am mostly worried about the UI complexity of this, and communicating this clearly to the user.
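One hypothetical shape such an “exhaustion” mechanic could take (the halving curve and rate here are invented purely for illustration, not exact numbers we’re considering):

```python
def exhausted_strong_weight(base_power: int, recent_strong_votes: int,
                            half_rate: int = 10) -> float:
    """Hypothetical 'exhaustion' curve: a user's strong-vote weight shrinks
    toward a normal vote's weight of 1 the more strong votes they have cast
    recently. The extra weight above 1 halves every `half_rate` recent uses."""
    extra = base_power - 1
    return 1 + extra * 0.5 ** (recent_strong_votes / half_rate)
```

So a 15-power strong vote used sparingly stays near 15, while heavy use drives it toward an ordinary vote.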
Also still open to other suggestions. I am not a huge fan of just leaving them unlimited, mostly because I think it’s somewhat arbitrary to what degree someone will perceive them as a trivial inconvenience, and then we just introduced a bunch of random noise into our karma system, by overrepresenting people who don’t find click-and-hold to be a large inconvenience.
Some important background thoughts here:
I think a lot of reaction against wide-ranging-upvotes is that upvote/downvote used to mean something clear and legible, and (both with the original advent of LW2.0, and more so here), it’s turning to something less legible.
And I think, importantly, the legible version of it was wrong – the sort of thing that’s easy to measure but not the actual thing I care about.
So one option is to make peace with the fact that karma-scores don’t refer to a clearly legible thing.
Another thing, which might be correct, is to make karma even less legible – rather than have it look superficially like other karma systems that refers to a sort-of-legible thing, instead karma is just, like, a percentage-score “how good is this post probably?” or, hell, like, a color, or doesn’t appear at all or something.
(Or a combination, where for the sake of transparency the meta-data of what’s going on under the hood is showed on hoverover, but the top-level result of karma is a color/percentage/vague-thing-of-some-sort)
I want to suggest having new posts/comments be normal upvoted by default (by the author) instead of being strong upvoted, for a couple of reasons.
Sometimes I just want to make a not very important or not very insightful comment, which doesn’t really deserve to have a higher karma (or to attract more attention) than many other comments, but if I manually drop it to normal upvote, people might interpret that as others having downvoted it.
Having an initially high karma on a post or comment makes it less likely for others to upvote it further (or reduces the total amount of upvotes from others), which reduces the positive feedback from making posts and comments for people who have high karma / voting power.
(I think I might actually prefer to go back to the LW1 system we had at the end, but I’m not sure if that’s a rational response or if I’m just used to it and don’t like change anymore.)
(the “initially high karma will make people less likely to upvote things” is an interesting concern that hadn’t occurred to me. I’m not sure how to best test it, but thanks for bringing it up as a thing to watch out for)
Another issue that I don’t know if you’ve thought of: strong votes from people with high karma are not very anonymous. If I’m talking with someone and I strong up/down vote them (like here recently), it’s pretty obvious who did it, yet not fully common knowledge, which makes the situation even more taxing on the social part of my brain than just straightforward non-anonymous voting (like on Facebook).
Yeah, that is something we’ve thought about, and aren’t sure how to think about. We’ve considered straight-up requiring Strong Upvotes to be an actual endorsement that comes with a name attached. (We’ve also considered actually just making all votes public).
Most of the options come with some pros and cons that weigh together in subtle ways.
My subjective experience in the first 2 weeks has generally been to not know who is voting on what – theoretically I could keep track of everyone’s strong-upvote strength. Maybe over time I’d come to know them well enough that I’d start to track that automatically, for now it just feels lossy enough that I don’t notice.
Having a high amount of voting power basically feels like a disadvantage to me instead of a benefit, because it makes me more reluctant to exercise strong voting power. Maybe other people won’t be able to tell who voted on something, but the part of my brain that worries about this kind of thing isn’t really mollified by your data point.
Maybe as a compromise that ameliorates both of the problems I mentioned, consider capping the strong voting power of everyone at some low amount, like 5?
Good to know. (I do want to stay in explore-options and gather data mode rather than leap towards any particular solution but want you know I’m taking this pretty seriously).
I’m curious how you feel about solutions that are more in the direction of “make karma more illegible?” – possibly literally just randomizing +/- a few points?
There’s also the option of “you can upvote UP TO your max karma, but can choose any amount”. The two reasons we didn’t go with this were that it felt like asking people to make too granular a choice every time they voted, and that it didn’t let us move in the direction of eigenkarma. But it’s still one of the more obvious things to try.
What kind of analysis are you thinking of doing on the data that you’re gathering? I’m curious, and also pre-registration may be a good idea in situations like this to reduce bias.
Doesn’t seem like it helps as much as other possibilities and the cost seems substantial (in implementation, user education, getting used to it psychologically, maybe other unforeseen consequences).
I agree with this.
I think I may share some of the intuitions motivating this, although I like to get more data before changing course. (I also have intuitions running in the opposite direction)
I do think this is going to feel unnatural for a while no matter what, and I want to give it time to feel more-like-normal before drawing major conclusions.
Why would this be the case…?
My interpretation was “if people have a rough notion of how much karma something is ‘supposed’ to have, then if it already starts out at, say, 9, people will be less motivated to upvote it than if it started lower”.
… does anyone actually think like this? (Doesn’t this attitude obviously defeat the entire purpose of the voting system?!)
I do this myself for a couple of reasons:
Laziness—the marginal benefit of voting on something decreases with the absolute value of its current karma, but the cost of voting stays constant.
To prevent the “rich get richer” phenomenon: if everyone pays more attention to posts/comments that have higher karma, but votes independently without regard to current karma, then a comment or post that initially gets some upvotes will attract more attention and more upvotes. Similarly, if everyone did that, comments that arrive late, or sit in deeper threads or unpopular posts, will end up with much lower karma than more visible comments of similar quality. In other words, karma would reflect visibility × quality rather than just quality, and it would be hard for readers to recover the quality signal from that.
Can you say more about this?
Do people do this? They shouldn’t! (I certainly don’t.)
(You might say: “yes, Said, maybe you don’t, but most people do, and we’re not talking about your should-world”; but actually, we are talking about exactly that, aren’t we? So the question is whether that should-world is consistent. I think it is.)
That does not seem to have much to do with “how much karma something should have”, though. (Or rather, it does, but only if you already assume a notion of “how much karma something should have”; laziness cannot explain this notion.)
Karma is supposed to aggregate the commentariat’s opinions of posts/comments. If votes don’t express those opinions in the first place, then there’s nothing to aggregate and karma becomes pointless.
My rough guess from user interviews is that between 30% and 60% of users view karma that way (I tend to flip back and forth between the interpretations). I do not think it defeats the purpose of the voting system (currently at a retreat so can’t write a ton on why I think this, but I don’t think it’s obvious that it should)
This is shocking to me, and I would absolutely love to hear more about this perspective.
My main answer is “you get more signal out of the system if people assume votes have a ‘correct’ amount.”
Say we _don’t_ have strong upvote / small upvotes. There’s just a single upvote.
You see two comments: one pretty good, one great.
Say there are 100 clones of yourself using LW.
Would you rather both comments get 100 karma? Or the better comment to end up with more?
The only options here are
all 100 clones upvote both equally
all 100 clones only upvote the ‘great’ comment
all 100 clones run some sort of percentage-chance-to-upvote on the ‘pretty-good’ comment
all 100 clones upvote the pretty-good-comment if it has less than 35 karma.
The 3rd and 4th option seem roughly equivalent to me.
Variable-size-votes do enable the possibility of a finer-grained result with everyone voting purely based on their individual assessments, but if everyone voted for a comment exactly based on how good they thought it was, and you _add_ those votes up, then the karma ends up being more a function of “how many people saw the comment” than “how good it is.”
(Wei_Dai’s comments on ‘cost of voting’ and ‘preventing rich-getting-richer’ also play into it)
If you want an idealized system wherein the expectation is “everyone votes their true appraisal of a comment’s value”, then the way I’d implement it would be:
1. Comment karma is hidden until you vote on it (so you don’t anchor as much)
2. The karma displayed isn’t the total, but something more like an average (or a function that’s closer to an average than a total).
(I think this might actually be a good system, although it’s a pretty big change)
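As one hypothetical version of point 2, “a function that’s closer to an average than a total”: a mean damped toward zero when there are few votes, so that a single early strong vote doesn’t dominate (the damping constant is invented for illustration):

```python
def displayed_karma(votes: list[int], damping: int = 5) -> float:
    """Hypothetical damped-average display: behaves like a mean once a
    comment has many votes, but stays small while it has only a few."""
    if not votes:
        return 0.0
    return sum(votes) / (len(votes) + damping)
```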
I also think that this is likely to be a pretty good system. (I can see the outlines of some downsides, but would have to think further before I could describe them satisfactorily; in any case, it does not seem to me, at first glance, that any downsides of this system would exceed those of the current system. But this is not yet a strongly held opinion.)
Are these the only users, or is it me and my 100 clones and also some other users? I’m going to assume the latter, because in the former case the entire karma system is just pointless; correct me if my assumption is mistaken.
In that case, I strongly prefer the 3rd option. (The 2nd is an acceptable fallback position if the 3rd is unavailable.) The 3rd and 4th options do not seem at all equivalent to me; it seems strange to suggest otherwise.
My assumption here is that the outcome of options #3 and #4 are fairly similar. If Great Comment is 3x as good as Good comment, then option #3 looks something like “33% chance of upvoting” and option #4 is “upvote if it’s less than about 33 karma. Possibly downvote if it’s got so much karma that it’s getting sorted higher than the Great Comment.”
How similar the outcomes are depends on some of the precise numbers, but I think the order-of-magnitude* is about the same.
*where by order-of-magnitude I mean “base 4 order of magnitude”, which is the comparison that usually seems most relevant to me.
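A toy simulation of options #3 and #4 under these assumptions (100 voters, a 33% appraisal of the pretty-good comment; all numbers here are illustrative):

```python
import random

def option_3(n_voters: int, p: float, rng: random.Random) -> int:
    """Option #3: each voter independently upvotes with probability p."""
    return sum(rng.random() < p for _ in range(n_voters))

def option_4(n_voters: int, target: int) -> int:
    """Option #4: voters arrive one at a time and upvote only while
    the comment's karma is still below the shared target."""
    karma = 0
    for _ in range(n_voters):
        if karma < target:
            karma += 1
    return karma

rng = random.Random(0)
n, p = 100, 0.33
target = int(p * n)  # 33
avg_3 = sum(option_3(n, p, rng) for _ in range(1000)) / 1000
score_4 = option_4(n, target)
# Both land near 33 karma on average, but option #3 has binomial
# variance around that mean while option #4 hits the target exactly
# (given enough voters and ignoring timing effects).
```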
Option #4 is not “thread-safe”. It therefore can, and very likely will, cause chaotic behavior with unpredictable and potentially quite undesirable attractors.
Edit: Note that we may already observe this taking place.
Doublechecking what you mean by “thread-safe” (multiple people looking at the same thing at the same time making decisions before they see what other people do?)
Right.
Cool. That doesn’t seem like that big an issue to me, because the system has built-in error correction – people come back later, see that it’s slid past whichever direction they thought it should go, and can change their vote. It more robustly converges on Good Comment getting a proportionally correct-ish score (whereas “roll a die in your head, upvote 33% of the time” will, some non-trivial portion of the time, result in things getting way-the-hell more or less upvoted than they should have been).
I should note: I don’t think this is necessarily the best approach, just that it’s an approach that seems “reasonable” enough that describing it as ‘defeating the point of the voting system’ doesn’t seem accurate.
The core of the problem remains: it requires users to know what other users are doing (as well as how many other users there are, and how many other users are paying attention to a comment, and other such things). The cognitive overhead is tremendously higher. The potential for error is (thus) also much higher.
I need to not think about this topic too much—I get wrapped around the wheel trying to find foundations and reasons for trying to be so ludicrously fine-grained for something that doesn’t do very much except in cases of voting wars (which we should deal with administratively via moderators, not programmatically via game points). It makes me angry out of proportion to the (lack of) impact.
I like being able to choose no, weak, or strong votes. I don’t like having to calculate (because I don’t like the fact that) my strong upvote is equal to X’s normal upvote. I don’t like that my choices are 2 or 6, and I can’t give 1 or 4 when appropriate (and if I had more karma, it would be worse).
For UI, _please_ don’t overload long-click to mean something other than browser context menu. Use separate affordances, or a pop-up slider, or something.
On which platform does long-click bring up the browser-context menu (mobile)? It seems like that would interfere with drag & drop in various circumstances.
If mobile, then I agree that I want to be careful with that. We haven’t yet properly tested the UI experience on mobile, and if we find that it interferes with a bunch of other functionality, we might change it to double-tap for that platform, or maybe “swipe right”, or something like that.
I’ve just built a double-click-based design for this (for GW), which seems to me to feel pretty intuitive, so consider this a vote for double-clicking. (PM/IRC me if you want the link to take a look at it, though it’s pretty much what you’d expect—nothing terribly surprising.)
Yeah, we have a doubleclick based variant as well (as well as a “just have 4 buttons” one). It certainly works pretty straightforwardly. The main reason I didn’t like it was it felt *too* easy to do, such that I could easily imagine people getting into the habit of doing it most of the time, and I’d prefer *some* kind of trivial inconvenience.
It *should* be enabled for mobile now although it looks like I screwed up a css class somewhere or something.
some thoughts before i try this out:
I am worried about this thing where I both want to know: how many ppl liked a thing vs how strongly ppl liked a thing. More for posts than for comments. For posts, if I see a number like 100, I am very confused about how many ppl liked it. It seems to range between 20-50. But if… the vote power actually goes up to 15. Then… I will be confused about whether it’s like… 10-50. That’s… a big difference to me.
I’d almost like it if for posts, 1 = normal, 2 = strong for people with lower karma; and for people with more karma, 1 = normal, 3 = strong? Or something that reduces the possible range for “how many ppl liked the post.”
There’s also a clear dynamic where people with 4-6 karma tend to check LW more frequently / early, so … um… karma tends to go up more quickly at the beginning and then kind of tapers off, but it’s like…
I dunno, it’s kind of misleading to me.
Why do you top out at 16 instead of 5? I’m just … confused by this.
Kind of wish all ‘weak votes’ were 1, too, and karma scores only kick in if you strong vote.
First: curious how much these concerns go away if we straight-up added a hoverover tooltip that said “this post has W upvotes, X strong upvotes, Y downvotes, Z strong downvotes”? (This is something we’ve considered and don’t have any obvious objections to.)
Habryka and I have chatted a bunch about some of this. I’ve personally leaned towards “it may be psychologically better for small-upvote to always be ‘1’ and big-upvote is the only thing that ranges”, but this was more of an intuition/guess on my part. Habryka’s take was that it was pretty important for people who’ve been using the site awhile and understand the culture to have a bigger small-upvote signal.
I want to have this feature, because right now if I want this information the only way to get it is to constantly refresh my profile page to see how the karma on my posts and comments has changed, which over time lets me infer something about how many normal/strong upvotes/downvotes I got.
Personally, I like not being able to tell how many downvotes things have gotten. On the old LW, I frequently checked the percent up/down that comments and posts got, and it primed me a lot more to feel defensive or like I was in an adversarial environment. The triggered emotion is something like ‘Oh, this awesome thing inexplicably got 20% downvotes; I need to be on the lookout for bad people to push/strike back against.’
Hmm. This is an interesting point, which I think I agree with (having recently been testing out this feature and getting to see some of my comments get downvoted, and having a bit of the reaction you described).
I _also_ agree with Wei_Dai’s point, which is that right now we’re incentivizing people to learn the system but in a delayed, confusing way. Doing something to help people learn what the system means may be good.
Unreal’s suggestion of “just once, publish a list of posts and comments with their associated vote-types, so you can get a feel for what it means” is also plausible, although I note that this doesn’t really make it discoverable for new users.
Hmm. Hmm. Hmm.
So possible ideas:
1. Listing _total_ votes is helpful for at least inferring roughly what a post’s overall engagement is, which I think is valuable user feedback, and might get you 60%-80% of the way there. I notice that I feel way less “raised hackles” if I see my karma go down than if I see an explicit “you have been downvoted” remark.
2. Crazy idea: list total number of votes, and then list _strong upvotes_ and _strong downvotes_, based on the theory that… social management of the micro is bad and that it’s fine to roll regular upvotes/downvotes into a vague “total karma / vote count”, but that strong upvotes/downvotes should communicate something more.
3. Crazier idea: what’s most important to me, I think, is knowing if _people I respect_ have downvoted a thing. Theoretically, ideally, karma correlates with respect. In practice, some people have domain expertise and/or worldviews that lead me to weight their opinion on a post more highly. This gets into all sorts of complications and we wouldn’t get around to it for months even if we thought it was a good idea, but you might have a thing where people can opt into seeing each other’s votes (if they are something like “mutual friends”).
I would be sad if my small upvote was basically zero relative to my strong upvote.
Would you still be sad if your strong vote was maxed at 5?
1:15 is a big difference! But 1:5 is a lot less. And 1:3 is even lot lot less!
Yeah, I was just objecting to Ray saying keep small-upvote at 1. I think I like the current set-up.
On reflection, I also *think* I like small-upvotes only going up to 3 more than even 5. It means that the expertise of trusted users comes out more at the tails than the average.
That is too many numbers to parse! I only care about the # of ppl who’ve interacted with the post. Can I just have THAT number as a tooltip? That would mostly resolve my concern here.
Also, it’s kind of weird to me that I have 5 vote power given I’ve only really interacted with this site for… a few months? And you guys have, like, 6? 7? Are you sure your scaling is right here? :/
It seems good to me that the people who’ve contributed the most to the intellectual culture and ideas on the site have 10-15 vote power, relative to 5-8 like yours and mine. I’m only working on building the site, I didn’t invent a new decision theory.
It also seems good to me that a newer user like you, who’s written some great posts and been curated, has more influence on discussion than others who’ve been around a similar length of time.
Sure :P
An important thing here is that we’re _not_ wanting “number of people who’ve interacted with a post” to be the dominant thing people find themselves optimizing for. That incentivizes content that is some combination of clickbaity, tribal, viral, or minimally-and-unobjectionably-good. i.e. the rest of the internet.
We want to incentivize content that thoughtful people consider and think is real good, and the central thesis here is that the natural impulse to ask “how many people liked a thing” isn’t actually the best guiding star for incentivizing good content (either from a site design standpoint, or from a personal-motivation of individuals standpoint)
Well, that’s part of the reason why we’re increasing the scaling somewhat – if Strong Votes capped out around 5, there’s even less gradation between experienced users and new/medium users.
We do want thoughtful, good writers to have a fairly easy time getting to a voting power that feels midrange, but then room/incentivization to continue to grow. In the 1-5 strong upvote scale, that’d mean getting “3” fairly quickly, and then little gradation between people with 2000 karma and 200,000.
That makes sense.
But it’s really confusing for my models of the post.
Cause there is a real difference between (lots of 2-users voted on this vs. a few 5-users voted on this). Those feel very different to me, and I’d adjust my views accordingly as to whether the post was, in fact, successful.
I get that you’re trying to make “lots of 2-users” and “a few 5-users” basically amount to the same value, which is why you’re scaling it this way.
But if a post ACTUALLY only has 2-users upvoting it and almost no 5-users, and other posts have 5-users voting on it but very few 2-users, that seems … worth noting.
Although, you could prob achieve the same by publishing an analysis of upvote/downvote patterns.
You could, for instance, release a list of posts, ranked by various such metrics. (Ratio of low:high user votes. Ratio of high:low user votes. Etc. Etc.)
That would be interesting!
This seems neat and probably worth doing, although this seems like even more interpretive effort than “mouseover to see how many votes of each type you got.”
I prefer the one-time cost vs the many-time cost.
Not sure how it’s a one-time cost? I was assuming the list of posts only comes out every so often, so every time you want to know the results for a new post you have to wait for such a list and then check it, and then you’d have to check it again for each new post you’ve written.
(If you were imagining the list-of-posts getting continuously updated, that doesn’t seem much different than simply providing the “number of votes (upvotes?) metadata on hoverover, in addition to or instead of the number of strong upvotes”. And while we could do either of those things, the main thing I was getting at is “we’re hesitant to make the goodhartable number the most easily accessible one.”)
I was assuming the list comes out once → I learn enough to understand what types of posts get what voting patterns (or, I learn that the data doesn’t actually tell me very much, which might be more likely), but after that I don’t need any more lists of posts.
I don’t care if it has my own posts on it, really. I care more about ‘the general pattern’ or something, and I imagine I can either get that from one such list, or I’ll figure out I just won’t get it (because the data doesn’t have discernible patterns / it’s too noisy).
Oooohh gotcha that makes total sense.
Random notes of my personal experience after a couple months of Strong Upvotes:
I find myself frequently wishing to have roughly 3 levels of granularity:
“mild social approval” (weak upvote) – used liberally, whenever someone does something I think is a step in the right direction that I want to positively incentivize.
“strong social approval” (I guess medium upvote?) – used somewhat sparingly when I observe someone making significant efforts to, say, write up predictions or cite references or build models… but which aren’t necessarily at a point where I think it’s important enough to demand other people’s time and attention.
“strong signal of quality.” – Used when I think something is some combination of true, important and well thought out, such that I think other people should definitely be paying attention to it.
I want the second and third levels to be something that has to be rarer in order to be effective (most likely simply becoming less powerful the more you use them).
Yeah, I’d like a higher effort way to signal even stronger that I like some content.
What about having users send a short private message explaining their strong vote? It’s more of a speed bump than forcing someone to hold the mouse down, and it seems valuable in either case:
For a strong upvote, verbal praise is a powerful reinforcer. Consider +10 karma vs “Reading your comment helped me X, and now life is great”. Nudging people to say the second without any worry of cluttering up a thread seems potentially very good.
For a strong downvote, see this post. (Could also be an argument for downvote-free zones.)
I kinda agree with Dagon that we’re entering the zone of diminishing returns, but I could see this being very useful if the site was in danger of being overrun.
Trivial inconvenience alert: My regular username/password did not work on the lessestwrong site, and I had to create a new account.
Cool. We have considered doing the “strong vote requires short message” thing, and it might well be a good combination of useful + the right amount of trivial inconvenience.
Main reason not to (yet) is it’s an extra layer of complexity to code, and it seems better for a few reasons to try the simpler version of the thing first.
I am worried about how easy this makes it for one individual to come along after people have stopped paying attention to the thread and massively distort the voting. Like, the combination of a strong upvote and a strong downvote could easily outweigh many people acting in good faith.
Hot take: that’s something people could have done before, if-and-only-if they already had a very strong upvote/downvote. I don’t think adding gradations much increases the affordance/incentive for people to do it more than weighted karma did originally. As before, karma is a certain measure of trust.
The problem is that one strong vote can beat many average votes, even from equally high karma users. This kind of outvoting couldn’t have happened before.
That does seem like a potential concern, but I think that’s something I’d file under the “maybe put limits on strong upvotes if this seems to be a problem.”
It doesn’t seem intrinsically bad if someone comes along and strong-downvotes a bunch of stuff after the fact – that may in fact be the system working as intended, especially if the upvotes were small-upvotes by people giving a mild “ah, this seems good”, and the strong-upvoter said “actually, I think people are missing a subtle way in which these comments were wrong, or were probably all groupthink/applause-lighting, and yes these comments should not be as highly upvoted.”
My threat-model mostly fires if someone was doing that all the time or something.
A simple thing you guys could do is freeze the scores on all the LW 1.0 content that got imported and used the old voting scheme. Otherwise the signal from those old votes may gradually get degraded by opinionated archive readers.
Initial feedback:
I find it unintuitive that, after I’ve strong up-voted, clicking to up-vote again makes it go down to soft up-vote. I feel like it should go down to zero.
I have 7,177 karma (so my strong up-vote should be 8) but empirically my strong up-vote is 7 (edit: solved, I have a different quantity of karma on LessestWrong)
The strong up-vote at the bottom of a post is off-centre:
(will add more below as I have things to add)
If I vote on something, then go to a different tab for a while, there are occasionally weird things going on. One time I’d upvoted something by 9 instead of 7, and one time I’d weak-downvoted something as well as strong-upvoted it, leaving it at +5. #Bugs Edit: It did the latter again, here’s the pic:
I also want to mention that it wasn’t super clear to me exactly when the strong up-vote had completed; sometimes I let go too early. This doesn’t obviously seem wrong to me of course, just thought I’d share the data point.
Unrelated: I *love* the green links! It’s so LessWrong coloured.
Oh, but my main feedback is that I love this feature. I’ve always wanted to be able to up-vote many things, but also to save my up-vote for the rare things that only I see are excellent, and that was pulling me in two different directions. Now I get to do both! Yay!
I feel like the voting icon jumps a *lot* when the number goes from single digits to double digits.
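One of the feedback items above is about the click-state machine (clicking again after a strong up-vote steps down to a weak up-vote, rather than clearing the vote). A hypothetical sketch of the two behaviors being compared, with illustrative names rather than the site’s actual code:

```typescript
// Illustrative only: the vote-toggle state machines under discussion.
type VoteState = "none" | "weak" | "strong";

// Behavior as described in the feedback: clicking while at "strong"
// steps down to "weak"; clicking a weak vote cancels it.
function nextStateCurrent(state: VoteState): VoteState {
  if (state === "none") return "weak";
  if (state === "weak") return "none";
  return "weak"; // strong steps down to weak
}

// Suggested behavior: any click on an active vote clears it entirely.
function nextStateSuggested(state: VoteState): VoteState {
  return state === "none" ? "weak" : "none";
}
```

The difference only shows up from the "strong" state, which is exactly the case the feedback flags as unintuitive.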
Bear in mind you have a different amount of karma on Lessestwrong
Ah! I had used the search bar to check my karma, which uses the same search as LessWrong.
I think you should give more time for the current multipliers to play out. In addition, I think having a separate button for super up-voting is better than click and holding, especially for mobile use and compatibility with different interfaces.
edit—also we ALREADY have a system for Chosen Ones to super-upvote or super-downvote things by promoting them or moderating them out of existence, I’m not sure why we need to add an extra form of super-multiplier effect.
Oh no, it is the future and I did not write those comments AFAICT, or at least I’m not sure if I did.
2018_Raemon, what did you see!?
I’ve started removing the default strong upvote on my new comments, because it gives them more weight than I want.
I was wondering if that was intentional when I came here; the post doesn’t seem to mention it, so I wasn’t sure if it was a bug just happening to me. Thanks for mentioning it!
I’m doing the same thing.
Note that these numbers approximately follow a log2 curve.
I think we literally started with log2.6 or something, and then fudged the numbers until they looked pleasant.
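For anyone curious what a log2-style curve looks like in practice, here’s a minimal sketch of a karma-to-vote-power mapping. The formulas and cutoffs are my own illustrative guesses, not the site’s actual table (the voteTypes.ts link elsewhere in the thread has the real values):

```typescript
// Illustrative sketch only: a log2-style mapping from karma to vote power.
// Formulas and cutoffs are guesses, not LessWrong's actual numbers.
function strongVotePower(karma: number): number {
  // Doubling your karma adds roughly one point of strong-vote power.
  const raw = Math.floor(Math.log2(Math.max(karma, 1)));
  return Math.min(Math.max(raw, 1), 15); // clamp to the 1..15 range from the post
}

function weakVotePower(karma: number): number {
  // Weak votes grow on the same curve but stay in a narrow 1..3 band.
  const raw = Math.floor(Math.log2(Math.max(karma, 1)) / 4);
  return Math.min(Math.max(raw, 1), 3);
}
```

On this guess, a user would need karma in the thousands before a strong vote reaches double digits, which matches the intuition above that power grows quickly at first and then tapers off.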
Just for the benefit of anyone searching for information about this: one thing that’s changed since this was posted is that the weak-vote strength is always either 1 or 2. Not even thousand-year-old vampires have weak/normal votes of strength > 2. (It was changed in February 2020.)
(Also: empirically I find that I almost never strong-vote on anything, and looking at the scores things are at it looks as if the same goes for LW’s other high-net-karma individuals.)
that link seems broken
Hopefully fixed
I think this is no longer correct, looking at the link I believe displays the code: https://github.com/ForumMagnum/ForumMagnum/blob/devel/packages/lesswrong/lib/voting/voteTypes.ts
I don’t think anyone is likely to care, but it is my understanding that a new reader will see the side-comment when they read the post, and so this clears up that possible confusion.
FYI I find it almost impossible to strong vote on iPhone safari. Safari interprets double tap to mean zoom (in/out), and the hit target is very small even when zoomed in, making it too frustrating. I get that it’s not supposed to be trivial to strong upvote and I also get that phones don’t allow the same range of interactions. Not sure if there is a good solution, just bringing it up.
Adding for redundancy: you don’t have to double click fast. Clicking two times with any time gap between them works. So you can tap slower than the zoom action.
Ah thank you!