April Fools: Announcing: Karma 2.0
Ever since we started work on the new LessWrong, improving the karma system has been a top priority for us.
We started giving users a higher vote-weight once they themselves had accumulated enough karma, and we have plans to implement various more complicated schemes, such as Eigenkarma. But we recently realized that how karma is calculated matters less than how karma actually influences the user experience.
The key purpose of a karma system is to allocate the attention-bandwidth on the site towards users who make the most valuable contributions. Historically we have done this via sorting, but sorting is only a small fraction of how we can possibly allocate attention. And as soon as we realized this, the next step was obvious.
Adjust the size of your content based on your karma
This means, as you get more karma on the site, your comments literally get larger. At the beginning, you will be a mere dust-speck among giants in the comments, but after you engage with the site for a bit, your words can tower over those of your contemporaries.
This changes everything. Vertical space on the page is now properly allocated according to our best guess about what content you will want to read. People’s influence and history on the site are clearly communicated to anyone skimming a comment thread. Competing with your friends for karma now literally translates to determining who has the larger “D”. The positive effects are too numerous to list exhaustively here.
We believe this truly revolutionizes the way karma works on LessWrong, which is why we are proud to call this new system “Karma 2.0”. We also believe there are many more promising improvements in this direction. We are soon planning to experiment with coloring your comments green or red based on the ratio of upvotes to downvotes your content received, and adjusting the line-height of your posts based on our estimate of how vacuous your claims are (to properly signal to the user the correct ratio of content to “hot air”). Stay tuned for the upcoming “Karma 2.1” and “Karma 2.2” which will implement these features.
However, if for some inscrutable reason, you want to disable Karma 2.0, you can do so by editing your profile (click on your username and then click “Edit Account”) and checking the “Deactivate Karma 2.0” checkbox.
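For the technically curious, here is a rough sketch of how a karma-to-size mapping like this could work. The logarithmic curve, the pixel bounds, and the function name fontSizeForKarma are illustrative assumptions, not necessarily the actual implementation.

```typescript
// Illustrative sketch only, not the real LessWrong code.
// A logarithmic curve keeps dust-specks legible while stopping giants
// from literally becoming larger than worlds.
const MIN_SIZE_PX = 10;   // floor for brand-new users
const MAX_SIZE_PX = 36;   // ceiling, so the page stays scrollable at all
const BASE_SIZE_PX = 14;  // size of a comment by a user with ~0 karma

function fontSizeForKarma(karma: number): number {
  const scaled = BASE_SIZE_PX + 4 * Math.log10(Math.max(karma, 1));
  return Math.min(MAX_SIZE_PX, Math.max(MIN_SIZE_PX, scaled));
}

// Applied when rendering a comment, e.g.:
// commentElement.style.fontSize = `${fontSizeForKarma(author.karma)}px`;
```

Under these made-up numbers, a brand-new user renders at 14px, while a 10,000-karma user tops out at 30px.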
Signed Oliver Habryka, Ben Pace, Raymond Arnold, and Matthew ‘Vaniver’ Graves
Frankly, I’m not a fan of this. The main worry that comes to mind is that it’s possible for new people to get karma, so their opinions could (conceivably) be read.
Doesn’t go far enough; people with less karma than me are still legible.
This is fantastic! Are you still collecting feature requests for Karma 3.0? I propose adjusting the default font of each comment based on some combination of karma, upvote ratio, and whether an ML algorithm considers it insightful.
The possibility space of this new feature is endless! To give just one example, if a comment is figuratively incomprehensible, Karma 3.0 could make it literally so, by changing its default font to Wingdings.
Yes please! Give us more feature requests (for the next time we do this).
This Berkeley Dad TRIPLED His Karma With One Weird Trick (Mods HATE Him!):
Go To Your Profile
Press ‘Ctrl’ And ‘+’ At The Same Time
Repeat Until You Attain Desired Karma Level
Instructions unclear, comment stuck in ceiling fan?
Additional formatting suggestions:
wHEn sOmeONe’s COMmeNt dOESn’T MaKe SEnSE.
wehn soenoeme’s cmnenomt rlaley dseon’t mkae sesne.
WHEN A COMMENT IS OVERLY AGGRESSIVE.
When the commenter is Italian.
When the writer is Jewish.
When the writer is a secret communist.
Instead of just voting comments up and down, can we vote comments north, south, east, west, past, and future to make a full 4D voting system? Position the comments in their appropriate position on the screen, using drop shadows to indicate depth. Access the inbuilt compasses on smartphones to make sure the direction is properly aligned. Use the GPS to work out the velocity and gravitational field exposure to make proper relativistic calculations. The comments voted into the future should only show up after a time delay, while those voted into the past should show up before they are posted. A potential feature for Karma 3.0 + √2i.
I for one welcome our new typographical overlords.
This is a good start but you really need to implement differential kerning. Lofty words like ‘Behoove’ and ‘Splendiferous’ must be given the full horizontal space commanded by their dignity.
I’m sure this day will be remembered in history as the day that LessWrong became great again!
I would also like to have a little jingle or ringtone play every time someone passes over my comments; please implement for Karma 3.0, thanks
I’d like to report a bug. My comments aren’t larger than worlds, which is a pity, because the kind of content I produce is clearly the most insightful and intelligent of all. I’m also humble to boot—more humble than you could ever believe—which is one of the rationalist virtues that any non-tribal fellow would espouse.
Yeah, sorry: we tried making some comments literally larger than worlds, but then our world was crushed under one such comment and the people running our simulation had to restore it from an earlier backup. Then we had to promise not to cause such trouble again.
I notice your words are now larger thanks to the excellence of this comment!
I just made an account to say this: Please do not implement “Karma 2.0” or its successors on GreaterWrong. I do not want text to be as large as possible, I want it to be a particular size. When I go to GreaterWrong I habitually hit “-” three times as soon as the page loads, because the default size is too big. This is not a problem, since zooming out does work properly. But if comments were all different sizes, then I would probably ignore the biggest comments as well as the smallest, just because they’d be annoying. Or I’d simply not read comments, just as if GW didn’t exist.
Edit after 10 minutes: Naq V sryy sbe lbhe Ncey sbby’f wbxr.
Ah, but it did get you to make an account :)
Kidding aside, check out GW’s text size adjustment feature. Your browser will remember the text size you set, and you won’t have to adjust it each time you load a page.
(Caveat: this feature does require Javascript, and doesn’t work in Mozilla-based browsers. My apologies if that makes it unusable for your browsing setup.)
But won’t the Biggest Luke simply eat all the other users?
It annoys me that all comments are the same color. At least you could change the shade of gray as well.
Alternatively, mean comments could be displayed in awful vampire-type fonts in red, and naive ones could be displayed in comic sans.
A fluid serif/sans-serif font, where the serifs get progressively bigger the more formal your comment is.
Thanks team, this is exactly the sort of work I’ve come to expect from you. ;)
I expect that I (and many other users) would get more benefit out of this feature if it was more personalized. If I have personally upvoted a lot of posts by a user, then make that user’s comments appear even larger to me (but not to other readers). That way, the people who I like would be a “bigger” part of my Less Wrong experience.
It’s a bit concerning that you seem not to have considered this possibility. It seems like this sort of personalization would’ve naturally come under consideration if LW’s leadership were paying attention to the state of the art in user experience, like the Facebook news feed.
Wouldn’t this create an echo chamber where users keep noticing more of what they personally agree with, and consequently less of the rest?
Facebook caters to you, the user. Less Wrong, in my opinion, should revolve around the topic of discussion, not the users participating in it and what they value as individuals.
I think it wouldn’t be harmful if activities from users you routinely upvote are made more visible to you in your “main page,” “feed” or whatever it’s called here. But once you enter an article and an open discussion is set in place, hierarchies should dissolve, not be accentuated. After all, what matters to us beyond a user’s personal profile is the quality of their ideas, not who they are.
Smart and thoughtful change, what a wonderful Easter surprise!
So when does our old LW karma get imported to make our posts even larger?
Users who hit 10,000 karma will get their old karma imported.
Please don’t upvote me I don’t want anyone to hear me
I’m so sorry.
My “LW Power Reader” had a feature that used color and font size to highlight just the metadata line of highly upvoted comments, which was very helpful for scanning for especially good comments in the middle of large threads. See here for a screenshot of what it looked like. I suggest it might be useful if LW ever regularly gets hundreds of comments per post again.
This seems perverse; the higher my karma, the fewer words I can fit on a page.
With great power comes great responsibility. As your influence grows, you should be urged to be more careful with how you use the attention that you are given :P
I was gonna comment something witty, but actually I just want to measure my D. So here goes
Test of my size.
Finally.
No flashing neon text? A bit disappointing but happy to see refreshing innovation from the rationality community...
I’ve noticed that the larger font annoys me enough that I just scroll past it looking for more reasonably-sized fonts, leading to the exact opposite of the desired effect :/
Huh. Looks like Karma 2.0 is a failure. We’ll reverse it during our next available office hours.
Gosh-darn it.
Posting because my ego feels compelled to see how big my text is.
I’m loving this new Karma system!
Metaculus (a community forecasting platform for tech/science/transhumanist questions) has a similar feature, where comments from people with higher prediction rankings have progressively more golden usernames. The end result is that you can quickly pick out the signal from the noise, and good info floats to the top while misinformation and unbalanced rhetoric sink.
But, karma is more than just a measure of how useful info is. It’s also a measure of social standing. So, while I applaud the effort it took to implement this and don’t want to discourage such contributions, I’d personally suggest tweaking it to avoid trying to do 2 things at once.
Maybe let users link to Metaculus/PredictionBook/prediction market accounts, and color their usernames based on Brier score?
Then, to handle the social side of things, make the font size of their posts and/or usernames scale with social standing. Maybe make a list of people from highest to lowest on the ladder? You could ask users to judge each other anonymously, or ideally use machine learning to detect submissive gestures and whether or not displays of dominance are recognized by other commenters.
As AI improves exponentially and learns to detect ever more subtle social cues, the social ranking would become more and more accurate! Eventually, it would be able to tell you your precise social standing, to within ±1 person, and predict exactly what concrete advice to follow if you want to get in front of the person ahead of you. You’d know their name, personality, knowledge base, etc., and could see exactly what they were doing right that you were doing wrong. It would solve social awkwardness by removing all the ambiguity and feelings of crippling uncertainty around how we’re supposed to be acting!
I agree with others who commented here that the aesthetics of it aren’t really that satisfying right now. But I think the system has the potential to be good overall, so I don’t really want to turn it off. Maybe the differences should be less extreme?
This could be a good thing to try. Make it more subtle and also have more levels—I notice that my comments are of the same size as Qiaochu’s. That’s a little strange, since he has nearly 10x my karma. Honestly, only the karma he earned by commenting on my posts should count. Can the mod team look into this?
Excellent, my words will finally get the prominence they deserve!
So this...
I assume this is an April Fool’s joke, but it’s also more than that. I take it as a social experiment of sorts, even if it’s an involuntary one. Status is such a monster, it’s such a big part of our motivation, that literally attaching a quantitative display of your karma size to posts is just not going to be harmless.
And I’m not that thrilled with the responses here. The biggest advance in how I handle signaling has been to be more honest about it, which in this case means acknowledging that the motivation is real. Joking about it seems misguided. I think. Right now, I’m obviously struggling to phrase this post at an appropriate level of humility and am worried about missing it, so, clearly, I’m not beyond caring about it. And I do consider status concerns as motivations to be largely harmful, so that’s bad.
I think it comes down to this: if it genuinely isn’t a big deal to someone and that’s why they joke about it, that’s fantastic. I’ve not nearly come that far. If joking about it is mostly about signaling not-caring-about-signaling (counter-signaling?), that’s bad – from a bird’s-eye view of the community. And I suspect the latter is almost entirely true.
If it was done as a mere joke, that also doesn’t seem good to me. I’ll put it like this: when Robin Hanson was asked about LessWrong in a recent podcast, his reply was that he worries about people mostly using it to look for little signs of affirmation that they are already being rational. In other words, even after whatever progress we’ve made regarding status, status is still the primary concern for this site’s usefulness. So if it was done as a joke, in the sense that the people who decided to do it expected everyone to just take it as a joke, then, I think, they were just factually wrong.
This certainly makes me uncomfortable, but I’m also curious to see what comes of it.
Your comment made me wonder what humor/joking is about in general (in Robin Hanson’s “X is not about Y” sense). It turns out there is a really nice chapter in The Elephant in the Brain about it. Highly recommended. (At least that chapter. I haven’t read the rest of the book yet.)
If you can make a case for this, I would be very interested to read it. My current view is that a lot of “good” motivations such as altruism and intellectual curiosity are actually linked to status concerns at a deep level, and since it seems infeasible to get rid of status concerns as a motivation anyway, we should instead try to optimize our social norms and institutions to maximize the good side effects of status seeking, and minimize the bad ones.
There’s also this paper, which I assume the Elephant in the Brain chapter draws on (though I didn’t check the references) (emphasis added):
I can’t really. I think our disagreement is subtle. I’ll explain my view and try to pin it down.
What’s bad about status is that it may cause us to optimize for the wrong goals, because it may motivate us for the wrong goals. By the time a goal is determined, status already has (or hasn’t) done its damage. Given a fixed goal, I would not consider it negative if part of the motivation was related to status.
This means it comes down to a preference between two approaches: 1. minimizing status seeking as a motivation, to avoid being motivated toward the wrong goals; 2. changing the field so that status motivations are better aligned with positive outcomes.
If I understood your position right, you think we should do #2. And I think we should do #2. I also agree that overcoming status concerns isn’t possible. But the sentence you quoted is still true, at least insofar as it relates to me. For now I’m agnostic as to whether the rest of what I’ll say here extends to anyone else.
To explain where I think your model stops working for me, I have to differentiate between the utility function U that I would like to have, and the utility function V that I actually have. The difference is that V makes me play a game when, according to U, I should rather be reading a MIRI paper. Okay, now according to the master/slave post, the master has “the ability to perform surgery on the slave’s terminal values” – but I think it can access only V, not U. Among other things, I think the example you give no longer works.
I don’t think this is possible for me. Big decisions are governed by U; the master doesn’t have access.
Now, on LW in particular, I can say with confidence that I’m not here to improve my status. I don’t write posts to improve status. But once I have written a post, then at least 70% of my concern goes into how it will make me look; and even during writing, a lesser but still significant chunk goes into that (larger for comments). That seems clearly bad. And I think it’s fairly representative of how I work in general. I’m trying to think of an area where status motivates me to do something positive that I wouldn’t otherwise do… the search doesn’t come up empty, but it’s less than your model would suggest. Removing status as a motivation entirely should be net positive.
I’m fairly confident that this explanation is correct, but again I don’t know whether it generalizes (actually, the fact that I fully support doing #2 suggests that I don’t really expect it to generalize). The boldest thing I’ll say is that I’d be surprised if LW didn’t work better with status taken out of the equation. I don’t expect activity to drop drastically. The link to intellectual curiosity in particular seems questionable. But I can’t make a stronger case for that, at least not yet.
Somewhat unrelated – The Elephant in the Brain suggests that the conscious part of your brain should be thought of as the press secretary, whose job it is to rationalize the things that the rest of the brain decides to do; to come up with bogus explanations for why you did what you did. This is fairly similar to your master/slave model, the biggest differences being that the press secretary isn’t given her own (terminal) values, and the emphasis on rationalization. Obviously, I think the conscious part does have terminal values. I’d take a hybrid of the two models over either one.
OMG, please tell me this text size thing is an April Fools joke. It’s painful on my eyes, and as a web designer, painful to my soul.
This is ugly, and I do not want it. Can I disable it for my comments on others’ feeds?
I’m already deeply uncomfortable with the way scoring and gamification alters interaction. Making it so visible and in-your-face may well be the final straw to get me out of here completely.
Oh, so this is how it works! Before reading this article I noticed that some comments used large fonts, and my guess was that those were the unread comments. And I thought that was an excellent solution! :D
Knowing the right answer, my feelings are ambivalent. I agree that karma should somehow translate into greater visibility, but the size of the largest comments is a bit uncomfortable; I suggest adjusting the size interval. (If there is a person with high karma who likes to write long comments, this will become unbearable.) Also, it should probably be some combination of user karma and comment karma, as MondSemmel suggested.
Joke’s on you, I can just edit the CSS.
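For anyone who actually wants to do that, a user-side override along these lines would probably be enough; the .comment-body selector is a guess for illustration, not necessarily LessWrong’s real class name. A minimal sketch as a userscript:

```typescript
// Sketch of a user-side override: force every comment back to one size.
// ".comment-body" is a guessed selector, not LessWrong's actual markup.
const override = document.createElement("style");
override.textContent = `
  .comment-body, .comment-body * {
    font-size: 16px !important;
  }
`;
document.head.appendChild(override);
```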
I literally don’t even know if this is a real thing or not because I use GreaterWrong.
So... was this an April Fools’ joke or not? >.>
An answer to your question.
I have no RSS monkeying going on, and Wei Dai and Kaj Sotala have the same font size as you or me.