“High Value” Karma vs “Regular” (i.e. Quirrell Points)
I consider this a low value post, because it muses about a particular idea without actually doing hard work to get it done. (I do not have the knowledge or resources to do so, or even know if it’s possible to implement this idea on Less Wrong. I’m not even sure if it would be worth the effort)
There are posts that I upvote because I thought they were funny, mildly informative or well reasoned. Sometimes I upvote a simple (easy to produce) link to a good article. Sometimes I admit I upvote them simply because I agree with them (I generally don’t upvote things I agree with if they use bad reasoning, but I am less likely to upvote something I DON’T agree with unless it is extremely well thought out, to the point that I actually updated my beliefs because of it.) I don’t apologize for that—it’s a natural outgrowth of the Karma system. It costs me nothing to give Karma to whatever I like and there is no means to enforce any particular usage of Karma.
But there are things I upvote because they were actually important and good and required hard work to put together. And I feel a little bit sad that the most I can reward those things is with a “click” that is exactly as valuable as the click I give people who said something mildly funny.
High value posts tend to acquire a lot of Karma because a lot of people feel motivated to click. But I think there is a qualitative difference between a guy who makes one amazing post that gathers 80 Karma and a guy who makes 80 posts that are kinda neat. I think it would be interesting, fun, and potentially valuable to distinguish between those kinds of people.
So what if we had regular Karma, and then some kind of Superkarma? (Perhaps a good name would be “Status”.) Status points would be genuinely rare—when you give one, you are not allowed to give another one for at least 24 hours. You can still give them to a funny joke or a viewpoint that aligns with your tribe, but I think making them rare would encourage people to reward genuinely important things. (I’m not sure 24 hours is the ideal waiting period; it just sounded nice.)
I’m actually kinda amazed by how much I care about Karma. I get a “sweet, level up!” message in my head every time I see that I’ve passed another 100 points. But most of my Karma is from random comments. The two fastest upvoted posts I made were links to an article about the Singularity and a webcomic, neither of which required much effort on my part. The fact that my more serious posts are judged by the same metric is (slightly) demotivating.
I think “name recognition” makes a better system than Quirrell Points. It’s nearly unfakeable, carries a lot of weight, and is hard to earn. Sure, it’s not easily quantifiable, and it takes some time in the group to internalize it, but as recognition for outstanding accomplishments in a community it’s been working fairly well for thousands of years.
A problem is that karma attempts to capture orthogonal values in a single number. Even if you ultimately reduce those values to a single number, they still need to be captured as separate values first; see the Slashdot karma system for a half-assed example.
Karma seems to roughly fall into one of three buckets. The first is entertainment value e.g. a particularly witty comment that nonetheless does not add material value to the discussion. The second is informational value e.g. posting a link to particularly relevant literature of which many people are unaware. The third is argumentative value e.g. a well-reasoned and interesting perspective. All of these are captured as “karma” to some extent or another.
One objection is that this makes it difficult to filter content based on karma, which raises questions about its value. If, for example, I am primarily interested in reading hilarious witticisms and interesting layman opinions, there is no way to filter out comments that contain dry references to academic literature. Alternatively, if I lack an appropriate sense of humor, I might find the karma attributed to immaterial witticisms inexplicable.
Even if a clever system were devised and ease of use were ignored, there would still be issues of gaming and perverse incentives (e.g. the Gibbard-Satterthwaite theorem). To misappropriate an old saying, “karma is a bitch”.
Upvoted because it was well reasoned (if lacking in information I didn’t already know), and because the last line is funny.
Upvoted for funny value.
I can think of at least 2 kinds of value you missed:
Artistic value, for things like well-written stories and non-humorous but inspirational images.
Implied effort value, for things like summaries that required reading through some huge number of articles but aren’t that impressive in themselves, other than letting people know that those hundreds of articles didn’t contain anything interesting and saving them the trouble of reading them.
The SIAI could create a Flattr account:
If you really like a post, additionally to upvoting it one could use Flattr...
Are you thinking that a small donation to the SIAI would be made in the praiseworthy comment-poster’s name, by the admirer of the comment? Sounds like a positive-externality-generating system of moderation.
Is your PGP public key published anywhere?
I put it on the wiki just now.
Wait, to clarify, IS this something people can actually do? I don’t know what a PGP public key is or how I would use it. I was assuming you were just being funny.
The PGP thing is a cryptographic signature which proves that the comment was written by me. What I did was, I made a PGP key, which has two halves: a public key, which is now on my user page of the Less Wrong wiki, and a private key, which is stored safely on a computer I control. I input my private key and a message into GnuPG, and it outputs a signature (what you saw in the earlier comment). Anyone else can take that message with its signature, and my public key, and confirm that I must have had the private key in order to sign it that way.
This means that Quirrell points can’t be taken back—if I deleted or edited the comment, as long as you saved a copy you’d still be able to prove that it was there. It also means that Quirrell points can’t be forged, even by Less Wrong administrators, which is important because otherwise Eliezer Yudkowsky might decide to give them to people I don’t like.
The only thing necessary for one to issue valuable points is to convince other people they’re valuable, and my other copy has done most of that work already.
What’s your private key?
It’s 4,096 paperclips on a ring, each bent in one of two ways to indicate either a 0 or a 1. Neither the 0s nor the 1s could hold paper together in their current shape.
You’re a bad human. I’m going to give a negative-Clippy-point to anyone you give Quirrell points to now.
I mean, once I get GnuPG to work.
You realize that while Quirrell points cannot be revoked if saved, it is very easy to delete or ignore a negative point.
Also why would you punish people, because Quirrell happens to like what they wrote? Will you also burn the books Quirrell happens to enjoy?
People who care about Clippy points won’t ignore it, and I won’t delete them (edit: “them” refers to the evidence of the Clippy points, not the people who care).
Because User:Quirinus_Quirrell does very anti-clippy things.
How can you burn a book? I’ll certainly reset any encoding of texts that User:Quirinus_Quirrell likes to the null state (if I can do so to all known instantiations), but you can’t “burn” data; you can only entropize certain instantiations of it, which vary in their source-recoverability (a kind of inferential distance).
Clippy there are various levels of action here.
You can punish Q. You can punish someone who does business with Q knowing who he is. You can punish someone who does business with Q not knowing who he is. You can punish someone for being liked by Q. You can punish someone for having done something that Q liked.
You choose the last one. That does not even give the respective person the ability to deflect the praise they got; you just punish. (Or rather, poke.) That is pretty low. And it opens you up to stupid levels of manipulation.
My policy discourages others from doing things that User:Quirinus_Quirrell likes, as those things are likely to be hurtful to me. I believe the level of pseudo-punishment I mete out is proportional to the pseudo-crime, as they involve the same mode and magnitude.
I infer, then, that “stored safely on a computer I control” means resting on top of the case?
Now that’s just mean.
See this summary of PGP. The essential idea is that one can construct mathematical functions which are difficult to calculate without secret information, but whose outputs anyone can easily verify given the “public key” (which Quirinus put on his userpage). Thus, to provide authentication that a message really came from the person it claims to, they provide both the message and f(message).
This rests on the RSA cryptographic system which relies on the fact that factoring integers is difficult.
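To make the RSA idea concrete, here is a toy sketch of sign-then-verify in Python. Everything here is an illustrative assumption: the primes are textbook-tiny, the digest is truncated mod n, and there is no padding, so this shows the math only and is in no way secure, unlike a real PGP key.

```python
import hashlib

# Toy RSA keypair (assumption: classroom-sized primes, purely illustrative).
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: e * d == 1 (mod phi)

def digest(message: str) -> int:
    """Hash the message and reduce it into the keyspace."""
    return int(hashlib.sha1(message.encode()).hexdigest(), 16) % n

def sign(message: str) -> int:
    """Raise the digest to the PRIVATE exponent; only the keyholder can do this."""
    return pow(digest(message), d, n)

def verify(message: str, signature: int) -> bool:
    """Anyone with the PUBLIC key (n, e) can check the signature."""
    return pow(signature, e, n) == digest(message)

msg = "I hereby award Yvain one Quirrell point."
sig = sign(msg)
assert verify(msg, sig)  # a genuine signature checks out
```

A tampered or forged message yields a different digest, so its verification fails with overwhelming probability; that is what makes the signed comment unforgeable and undeletable once saved.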
Factoring integers isn’t hard.
If MWI is correct, and people trying to figure out my private key try to use quantum suicide to get my key then in the vast majority of the wave function I will observe my eavesdroppers having blown their brains out.
What was it you were surprised that people can actually do? Put signatures together? Use public/private keys to sign something?
I am curious about the confusion you experienced, because that seems like a good example of assumed knowledge. Most LW readers are probably aware of PGP and assume everyone else is.
I was only vaguely aware of PGP (I didn’t know the name of it, anyway). At the time I think I was also confused about how seriously I was supposed to take the notion that you’d use a PGP public key for something so silly as declaring Quirrel points. It was a multi-level joke that required some knowledge to be not just understood but firmly internalized before it was funny.
(I’m assuming now that part of the joke was that you’d use a public key for this in the first place. I didn’t get that at the time)
Oh, you mean you had thought of a limit on what kind of data could be signed? But the PGP signing is just technology, while the content is free form and can be anything. Think of it as a notarized paper. You can probably get all kinds of papers notarized if you are willing to pay for it. PGP is more secure, less effort and less widely accepted. But basically Quirrel just puts out his comment, and signs it in a way that allows for later proof that he made it. It is really no big effort, just some software.
Thank you for responding.
No, I understood that. It makes sense, if we honestly were to A) highly value Quirrel points, and B) need to be able to tell reliably whether a given person had a given Quirrel point. I thought it was kind of cool when he first did it, then I thought about doing it myself, then realized that attaching a giant notarized seal to an internet comment that nobody was ever going to double check was pretty silly. Then I assumed that was the point.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
I’m not aware of any rule or norm which would put any limit on the silliness of data that can signed or encrypted with PGP. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux)
iEYEARECAAYFAk3M0d8ACgkQXbwSbN5LuzVhCQCgtj4Q5IpZf9OLwv+ghM21UPeV FNkAoIK6hdZquPjyocwJqxiwhjFVC/Cx =dQT1 -----END PGP SIGNATURE-----
I, for one, have never assumed that everyone else is familiar with PGP.
This is a fact that I am extremely glad of.
:)
-----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32)
iQGcBAABAgAGBQJNt4i3AAoJELRWk5hIT3u15KcL/Atnpo01qq30EtQsR2SgLUX7 vWFFM78LzAGI0Qo3T+QB/9ybn3RXopNU8CWRJQd0VKuK7oyQWA6rhcAH9d8tTAh/ SOTH+skLLZDKyd4f37TY+uoDuQ7/Kw7X4bLVZsBP1OTriXCJE5REwWgHClQ+NzYJ o2nfVlZQjfyklt2GpzB+NSR8LJRW3Ig5sfrjJCRT8E5knCDtB6JALhvFO31zH+8V x9+svq8Tay9wLC1jqfmleljAaa2JHQKoWuj7gcPFeDExM7iR4hTS3OxIvdYz24R8 rMEx7t1g3+xbboVYgN/z33hopIfUaxJELmqnCOHNHPaaJ7Ge0M0NSwWAAAeWbO2F p8QB/EdYWdR4SfQcOkfvgsLUV7ybRvki1xz6zc0ESQ4xQ9V82R5MJX/Zg6vRH+4+ CNhS9on0vjU/ikal5qjw5iJVHy2LmaEMo7fYZaiHhGF9er7bAF5hNI1T43yAraIe 1G1f6cnjCe8gbZJikchLEVZNIu/BFw+7ZV60iuiC1g==
=dngh
-----END PGP SIGNATURE-----
Are negative clippy points redeemable for anti-matter paperclips? ’Cause that’d be pretty awesome.
I don’t know how to make anti-matter paperclips, and suspect they would not last long in this light cone.
If you did know how to make anti-matter paperclips, and generating them and keeping them away from your regular paperclips was about as easy as generating regular paperclips and storing them in the safe zone, would you be motivated to do so, or do antimatter paperclips fall outside the sphere of your values?
Anti-matter paperclips could not counterfactually fasten regular paper together, so I don’t want to expend resources making them.
They could counterfactually hold antimatter paper together, though.
Who cares about antimatter paper being fastened together?
Was your idea that anybody can get their own signature and issue Quirrel points or just you?
Only I can issue Quirrell points (hence the name and the signature), but you can issue Normal_Anomaly points if you want.
Logical.
And I can issue Clippy points?
Why not tokens? Perhaps small objects useful for binding papers together?
I wouldn’t want to unnecessarily subject a paperclip I already have to potential abuse by humans.
I feel the same way, which seems to confirm something that occurred to me recently: the karma system is extremely important in that it determines the content, structure, and tone of posts and comments on LessWrong. It’s arguably the most important factor in determining how we communicate. If you care about karma, you’re going to say wildly different things than you would otherwise. How many times have you started typing a comment and then thought, “No, that’ll get downvoted”? How many times have you put a huge amount of effort into a comment or post to avoid downvotes? The point is, karma affects how we write and how we think, so it’s definitely a topic that deserves our attention.
On another note: I’m surprised that no one (at the time of this writing) has mentioned karma’s most important function: it keeps trolling to a respectable minimum. The current karma system is remarkably efficient at keeping discussions civil and intelligent—it’s undoubtedly better than any other forum or blog I’ve seen. We should definitely keep this fact in mind when discussing potential changes.
I find it frightening how much I care about karma, actually.
How much do you care about karma?
I’m not sure how to measure that, but my brain seems to exhibit all those quirks associated with money, but not with tokens that are then exchanged for money, in some experiments. At least that’s what a quick introspection says.
Edit: and it’s relevant that my brain does not seem to treat money that way.
How it relates to trolling is important, but I consider that a mostly “solved” problem. I don’t think the introduction of Quirrell Points would influence trolling one way or another. If we use it, it’s to accomplish/prevent something else.
What’s also interesting, though, is how optimizing for Karma (as is) can actually degrade post quality. When I found the Time Magazine article about the Singularity, I was like “sweet! I get to post a link and harvest free Karma without doing any work!” I’m not sure if I’ve actually consciously focused efforts on posting with a goal of “karma-for-minimal-effort” but an AI programmed to acquire Karma may well decide to do so.
If the karma system is working optimally, karma-for-minimal-effort should actually correspond to value-to-the-community-for-minimal-effort, which is a perfectly good thing to maximize. If it’s not, then the karma system simply needs tweaking.
I’m not entirely convinced that it’s a solved problem, but I don’t think that the introduction of Quirrell Points would have any influence on it. But, IMHO, the great strength of karma is that the community awards points, not just one person.
And I do agree that karma has the ability to degrade post quality—I’ve noticed that the first comment on a post will generally be upvoted the most (unless the comment is horribly wrong).
Never and never. My habitual way of writing fits pretty well into LW.
However, I’ve been known to stare at my point count while refreshing the page so that I can see right away when my point count has gone up, and I’m quite intrigued by the possibility of making the top ten list.
I should probably factor that possibility into my thinking about top-level posts I could write.
You are doing it consciously? I do not doubt that I am influenced by the karma system somehow (I certainly have the positive feelings when the score moves up), but to intentionally tailor the comments for purposes of karma gain seems wrong to me.
Seems wrong why? Wrong how? That strikes me as an intuition worth analyzing.
We sometimes characterize what we are doing here as “sharing ideas”. But the word “sharing” is somewhat ambiguous. Are we sharing altruistically—providing others with something they need/want? Is it more of a social-bonding thing—similar to sharing of candy among schoolchildren, or sharing of juicy gossip or the latest joke?
Introspecting my own reasons for posting, I think that the primary motivations are approval-seeking and (sometimes) a desire for honest intellectual feedback regarding an idea I just had. If I tune the wording of a comment or posting for maximum karma, how am I subverting either of those motives?
As for my commenting, I can think of five basic classes of motivations:
Curiosity. This covers asking questions when I think others may have some insight or know something which I don’t. Sometimes curiosity may lead me to try to initiate a discussion about some topic, usually when I am unable to formulate a direct question, or don’t know what to ask.
Sharing information. If I think some idea is worth attention (or simply relevant and funny) and has not yet appeared in the thread, I may post it.
Exerting influence. I may try to convince the others to do something (e.g. code a meetup bar for the main site, stop talking to “trolls”...) so that LW becomes more suitable to my preferences. Convincing others about something may fall here or under the previous category.
Social games. This includes expressing thanks for good comments or posts, supporting or criticising positions, and status grabbing by harvesting karma or displaying intelligence. Most motivations belonging to category 2 have a fair share of social signalling too.
Answering questions. (Edited to add: somehow I had overlooked the motivation of this very comment.)
I do consciously prefer the other types of motivation to the fourth one. One reason is that social signalling is already subconsciously the strongest motivator for participating in a discussion, and it may be preferable to compensate a bit. Standard debates all over the world are full of type-4 motivated contributions, and the results aren’t optimal.
That isn’t to say that we should omit social signalling at all, or that we shouldn’t take care of the formulation of a comment or a post. Only that the signalling purposes should play a secondary role to the other motivations—if I want to ask a question or present an insight, I shouldn’t refrain from doing so based on an expectation of being downvoted (there are situations when I should, but those are exceptions). Similarly, I shouldn’t post a comment if the only motivation is to get upvotes.
There is a difference between communicating in a way approved by the community, and trying to communicate in order to get approval. If the latter were the norm, evaporative cooling of our opinions would become a serious danger. I was alerted mainly by this sentence in the grandparent comment:
Saying wildly different things for want of approval raises a particular red flag for me, and therefore I wanted a clarification. Not that I am innocent in this respect: although I respect people who are willing to sacrifice some karma when necessary, I can’t remember posting anything controversial myself. Which I am a little worried about.
I think the results show the system mostly works. I think there’s little evidence the karma system is, in fact, broken. If it ain’t broke, don’t fix it.
Agreed—I’m not suggesting an overhaul of the system, just that we should be mindful of how it affects our writing and thinking. I certainly don’t want to get rid of such an effective anti-troll mechanism.
There is sort of something like this already, no? There aren’t 2 different types of Karma, but my understanding is that a post (not in the discussion area) will garner 10 karma points per upvote. Assuming you are posting your more serious and valuable ideas in post form, they are 10 times more impactful on your karma than your random kinda-neat comments.
This is true, and I thought about it as I was writing this. Maybe it’s good enough. But within the discussion section and comments there is still a wide variety in quality that it’d be good to differentiate between.
Eh. Integers aren’t worth very much. Insightful, engaged responses, on the other hand …
I said “fun” and “possibly useful.” Numbers aren’t ACTUALLY valuable, but I like them anyway.
One way to solve this problem without having to add extra features to the software would be to award fractional points of karma for slightly valuable comments e.g. the mildly amusing ones.
On the face of it, the software doesn’t support fractional karma, but this is easily accomplished using randomness e.g. award 0.5 points by flipping a coin to decide whether to award a point or not.
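The coin-flip idea can be sketched in a few lines of Python. The helper name and parameters are hypothetical, not an existing Less Wrong feature; the point is just that awarding a whole point with probability p yields an expected p points per vote.

```python
import random

def fractional_upvote(p: float, rng: random.Random) -> int:
    """Award 1 karma point with probability p, else 0 (expected value: p)."""
    return 1 if rng.random() < p else 0

rng = random.Random(42)  # fixed seed so the sketch is reproducible
trials = 10_000
total = sum(fractional_upvote(0.5, rng) for _ in range(trials))
# Over many votes the average award converges on the intended 0.5 points.
print(total / trials)
```

A voter who "awards 0.5 karma" this way hands out, on average, exactly half a point per vote, even though the site only ever records whole upvotes.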
Related thought occurs to me: one thing about Karma is that GIVING Karma is kinda fun, as well as receiving. Deliberately not giving someone Karma when they just made you smile feels mean to me.
The glorious thing about this suggestion is that randomly deciding whether to give someone Karma (and then telling them you randomly did so) is ALSO fun, perhaps more fun than giving a Karma point in the first place. Increasing the total fun in the world makes this an excellent solution.
I gave you a full point just now (in addition to my fractional point that came up tails), but not a Quirrel point, since your idea was very neat but didn’t require much effort.
I used a random number generator to decide whether to upvote this comment or not. It came up “upvote.” Enjoy your karma in this Everett branch.
Edit: somebody downvoted me? Was it because they didn’t like my comment, or because they flipped a coin for upvote/downvote?
My coin came up “don’t upvote.” Sorry!
It’s okay. There are probably other worlds where you upvoted.
Is that any less of a new feature than adding a new count? (I don’t know the answer to the question)
I don’t mean adding code to do that, I mean doing it yourself starting today, flipping a coin to decide whether to click the existing upvote button or not.
Ah. Interesting. Might try that.
Nooooo then I would be worthlesssssss.
Viz.
EDIT: A simple nonlinear karma that weights high karma posts more than low karma ones would seem to do exactly what you want here.
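A minimal sketch of that nonlinear weighting, assuming a made-up exponent of 1.5 and a hypothetical helper name (any convex function would do): one 80-karma post then outweighs eighty 1-karma posts, which is exactly the distinction the original post wanted.

```python
def weighted_karma(post_scores, exponent=1.5):
    """Total karma with each post's score raised to a power > 1,
    so a few highly-valued posts count for more than many minor ones."""
    return sum(score ** exponent for score in post_scores)

one_great_post = weighted_karma([80])        # 80 ** 1.5, roughly 715.5
many_small_posts = weighted_karma([1] * 80)  # 80 * 1 ** 1.5 = 80
assert one_great_post > many_small_posts
```

With exponent 1.0 this reduces to today's flat karma; raising the exponent tunes how strongly the system privileges "one amazing post" over "80 posts that are kinda neat".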
I’m sympathetic to the intent, but trying to measure the whole space of a social community with a numeric system, without giving a lot of thought to exactly what you want to do, sounds like an endeavor doomed to a series of unsatisfactory mechanisms with ever-increasing complexity. You can’t build an ultimate karma system that measures everything satisfactorily, at least not without a karma vector of around the same dimensionality as the state space of a human mind. Barring that, you need a clear idea of what exactly you wish to accomplish, and how much you are willing to pay for it in increased user interface complexity, increased usage complexity (figuring out exactly what you’re supposed to do with each message), and unintended consequences from people using the system in different ways than you intended.
So, I’m not really sold on things like this from a “would-be-nice” perspective, as the increased complexity would be a constant cost to the system from then on, and a whole new area of possible unintended consequences would open up.
I would like to see some experimentation on what works and what doesn’t with various types of mechanical forum systems though. It’s just a bit hard to do that systematically, since forums need to build communities to work, and the communities form their subculture partly around the particular mechanics of the forum.
People can always set up conventions that emerge from community culture in place of site server enforced stuff, like the PGP point thing in the comments here.
If lots of people think the comment is good, it gets lots of karma. That’s fine.
There is little evidence the karma system is actually broken in any manner that requires fixing.
This is exactly how I feel, except the solution I came up with for it was different (allow people to give a massive karma boost for a karma cost to themselves. Like, I can pay 5 karma to give something 10 upvotes)
Easily broken, if someone wanted to.
Yeah, obviously, but there’s nothing stopping people from making dummy accounts and upvoting their own posts now either. It hasn’t been a problem in the past.
You can treat the karma as a weak signal for group conformity. The numbers do not really mean much. Maybe translating it into fuzziness would help a bit. (Will someone write a plug-in for that?)
On the other hand, I find myself frighteningly attracted to the points. 50 to go until the next level-up. Might pose a minor problem on LW.
I do not, however, post or comment as karma bait. I just check the stats too often.
FTFY.
I see that you have edited the title of this post to mention Quirrell points. I appreciate the gesture. However, you’ve misspelled my name; it should have two ’l’s.
What’s funny is that I specifically noticed that when writing another reply, before editing the title. Fixed now.