Easy Predictor Tests
A fun game you can play on LessWrong is to stop just as you are about to click “comment” and make a prediction for how much karma your comment will receive within the next week. This will provide some quick feedback about how well your karma predictors are working. This exercise will let you know if something is broken. A simpler version is to pick from these three distinct outcomes: Positive karma, 0 karma, negative karma.
What other predictors are this easy to test? Likely candidates match one or more of the following criteria:
Something we do on a regular (probably daily) basis
An action that has a clear starting point
Produces quick, quantifiable feedback (e.g. karma, which is a basic number)
An action that is extremely malleable so we can take our feedback, make quick adjustments, and run through the whole process again
An ulterior goal other than merely testing our predictors so we don’t get bored (e.g. commenting at LessWrong, which offers communication and learning as ulterior goals)
Something with a “sticky” history so we can get a good glimpse of our progress over time
Personally, if I expect a comment or post to get negative karma, I don’t make it. Perhaps that’s too conservative on my part, but it’s the case.
I make plenty of comments for which I expect negative karma. It’s not too hard to couch unpopular views in language that will soften the karma hit, but sometimes it’s better to state things plainly.
For a while, I had a rough handle on how various factors influenced the karma of a comment (like a link to a previous LW post, or a relevant XKCD), but I never crunched any numbers.
My intuition is that if you are about to make a comment for which you expect negative karma, nearly all of the time you should instead choose one of:
a) send the comment in a private communication
b) expand your comment with a clearer explanation of where you’re coming from
c) make your comment into a top-level post (or posts, if the idea needs introduction), complete with your arguments and reasoning, for all to learn from without the miscommunication perils of quick comments
d) refrain from commenting
Sure, I’d bet there are exceptions, but making plenty of comments for which you expect negative karma? Strikes me as off. Pros: you can comment faster and more often, since you don’t need to explain yourself thoroughly or prune as many threads. Cons: negative karma presumably means readers would rather not have read what you wrote, which is a bad sign, unless perhaps it’s a topic you believe will actually improve their lives even without sufficient explanation, even when, after reading it, they decide they wish they hadn’t.
That last bit? Sure, maybe it happens, sometimes… maybe.
Not necessarily. You can, for example, make comments that you expect to improve the lives of a majority despite the displeasure of a majority.
You are not well-calibrated.
I’ve been mulling over asking for a data set of posts to do some stats on. I’ve noticed that short pithy statements can get a lot of karma, and would like to see some stats of karma vs comment length.
I suspect that short, concise posts and long, thought-out ones both get higher karma than ones that fall in between.
I tend to share your philosophy. It is also not too hard to offset karma costs by making a few extra posts.
Interestingly, I am often surprised when posts that I am expecting to be a karma sacrifice turn out not to be. As you suggest, extra time phrasing potentially unpopular views probably makes some difference. Even stating things plainly can be done elegantly.
Voted down for reasons suggested by thomblake: it’s often important to make a comment that you expect to get negative karma.
I made a comment recently that got a lot of karma, and I must confess, I did expect it. I printed out the thread and read it all before making it to make sure it wasn’t already made, and I was quite proud of it, so I was really pleased people liked it.
Voted down just to ruin your expectation of positive karma (not really).
SHA-1 of my karma prediction for this post: 4154a31be35736ebed12ba65cbcb89cabc08b375
How will I ever guess the 7-bit message behind such a secure 160-bit hash?
Nice try, but I phrased my prediction as a sentence. Granted, the search space of all sentences describing a karma prediction is smaller than the hash search space, but it’s probably big enough that you won’t be able to guess my prediction and spite me by voting in the opposite direction.
Damn, should have read this before I checked!
I confess I can’t quite be bothered to check, but: it’s not salted? oops!
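(For anyone tempted to try this, a salted commitment might look like the sketch below. The prediction text and file name are made up, and it assumes the openssl command-line tool is available:)

```shell
# Commit: hash a random salt together with the prediction,
# then post only the hash.
salt=$(openssl rand -hex 16)
printf '%s my prediction: this post ends the week at +5 karma\n' "$salt" > prediction.txt
openssl dgst -sha256 prediction.txt

# Reveal: a week later, publish prediction.txt so anyone can
# re-run the hash command and check that it matches.
```

Publishing only the hash commits you to the prediction, while the random salt blocks a brute-force search over the small space of plausible prediction sentences.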
0 isn’t positive so I was wrong.
Yep, I was expecting that reaction. :)
You were obviously expecting the ‘not really’ part too!
Unfortunately your post is at +2 at the moment so it isn’t worth me bothering with a downvote.
Me too. I remain as absolutely terrified of negative karma as I was when LW started. (In fact, it’s possibly even worse now, since I have a reputation to uphold!)
Right now, I have a recent comment at −1, which for contextual reasons I can’t delete, and it’s driving me crazy.
(Yes, I know this is bad, and I shouldn’t be like this. Unfortunately, I don’t know how to change myself in this way.)
I’m curious why you care.
My guess: humans are naturally wired to care about what others think; that part of my brain is hugely oversized or overactive.
It’s not without its advantages, but there’s a tradeoff. I get a huge motivational high when something is well received, but the price I pay is near-depression when something isn’t.
Unfortunately the “exchange rate” for me is bad: downvotes are worth about five to ten times what upvotes are.
This approximately matches the impact that positive and negative social exchanges have on people in general in person. (From my recollection of relevant studies read years ago.)
Plus, downvotes actually do mean more, because people don’t make them nearly as frequently.
Wow, that’s interesting. I might like to read about that at some point.
It isn’t where I first encountered it but The No Asshole Rule: Building a Civilized Workplace and Surviving One That Isn’t included a reference.
Also discussed in How Full is Your Bucket?
http://www.lmgtfy.com/?q=impact+of+positive+and+negative+social+exchanges
;)
Somewhat offtopic, but I’d like to see someone write a GreaseMonkey script that hides the commenter’s name and the current score on all comments, so your votes aren’t influenced by the commenter’s status or the comment’s current score. The commenter’s name could still appear on mouseover so you can reply to them.
You seem to be describing Marcello’s anti-kibitzer.
I’ve been using that for a while. It’s not so much surprising as interesting how much it changes one’s outlook.
It doesn’t deal very well with the “Recent Comments” sidebar; I added a 0.5s delay to wait while the AJAX loads. That’s not quite satisfactory, but I don’t know how to tell a GreaseMonkey script to wait for a specific AJAX query to load.
EDIT: OK, now I know. It would be helpful if LessWrong could be upgraded to a more recent version of the Prototype framework.
Cool idea. If there were an easy way to hide the scores on comments, you could wait a week to make the predictions and then immediately see if your predictions are correct. This would reduce the interval between making the prediction and seeing if it’s correct, which presumably would provide better training and feedback.
That is difficult, and one of the reasons I’m very hesitant to post. (Luckily, my first three posts were “important” enough to me that I almost didn’t worry about their scores before posting; as for the fourth, I somehow managed to predict the score almost exactly correctly.)
I’ve been thinking it might be useful to establish a consensus on the “meaning” of various levels of post karma. Here’s my top-of-the-head-suggestion:
0-10: Acceptable
10-20: Solid
20-30: Good
30-40: Quite Good
40+: Outstanding
100+: Unheard-of Stupendous Brilliance
I’d be curious to hear how this compares to others’ views of post scores.
Incidentally, this reminds me: there ought to be a specific term for the glee experienced when one’s post or comment is voted into the karma stratosphere.
I suggest “Yvainity.”
(Sorry, I couldn’t resist. No offense to Yvain, whose posts deserve their high ratings!)
Don’t forget karma inflation.
As more people join Less Wrong, more currency is also created, devaluing each unit of karma. This means that a post made a year ago that got 30 votes is getting more support than a post getting 30 votes today.
But at least we don’t have a central bank yet...
But surely this is offset somewhat by the possibility of new members voting on old posts.
Indeed, but in that regard not all old posts are created equal.
Some are highly linked from more recent posts, from the about page, from the wiki and sequences page. Other are orphans that will only be found by a very small fraction of recent members.
I bet someone could do a thesis on the economics of karma in online communities (maybe it has already been done, in fact—has anyone seen one?).
I think of 0-10 as Weak and 10-20 as Acceptable, but otherwise similarly to you.
It will not remain 0.
Daaaaaaaaaang.
(note to future observers: parent was at 0 when I posted this)
Would have been downvoted had I not included this prediction.
I will not predict that my prediction will be wrong. That would be silly.
I have a file in my home directory which includes my prediction, along with some extraneous text. md5 hash of this file: cc58112f13e9e92495782bac4a9443bc
Edit: wedrifid and AngryParsley have correctly informed me that md5 is broken. sha1 hash: 0cc4e8bd90a897c2f0d0c561780f69561b7af072
File.
Predicted 1, got 0.
Now that could have been an interesting one to bet on. All kinds of second guessing. Including calibrating the reward size compared to the overheads (ethical and practical) for cheating.
Were it practicable to prevent sabotage...
Just makes the prediction more difficult.
Was thinking on the other end, actually.
MD5? Colluder!
I get … a reference? All I know about MD5 is that (1) it shows up in the same UNIX man page as sha1, (2) it’s a command in Macintosh UNIX, and (3) it’s a cryptographic hash.
I’m really just ripping off AngryParsley.
MD5 isn’t very useful as a cryptographic hash these days. It’s not hard to find collisions for a given hash or create two plaintexts with the same hash. In fact, this has been used to create a rogue certificate authority. SHA-1 is looking pretty weak, but finding or constructing collisions with it is still infeasible.
To quote a certain BBC television presenter: oh, cock. Is there a better cryptographic hash than those two commonly available?
Edit: Let me clarify—of course they’re out there, but I was hoping to pick something which is very easy to find, install, and use among those interested. (Anything with a reputable website I could link to for Windows & UNIX downloads would be fine, I’m sure.)
Yup. There’s SHA-2 and some other algorithms. Right now the NIST is holding a contest for SHA-3. It’s narrowed down to 14 candidates. The winner will be announced in 2012.
ETA: Really though, unless you’re some super-spy, SHA-1 should be good enough until stronger hashes become common.
I don’t even have SHA-2 on my computer—SHA-1 hash added to original comment.
shasum should support all the algorithms:
Take a look at the shasum man pages for more parameters.
Unfortunately, I don’t have shasum on my MacBook—I had to use openssl.
Are you using an earlier version of OS X? I’m on 10.6 and it looks like shasum comes with 10.6.
Yes, 10.5.8. With drop damage, so it’s not practical or economical to invest in an upgrade. (Am considering the purchase of a netbook as a replacement sometime in the unspecified probably-more-than-a-year-from-now future.)
I already have a simple, accurate method for predicting the karma values of my comments. Agree with EY and get positive karma, or disagree with EY and get negative karma. More generally, support status quo and get positive karma, or reject status quo and get negative karma.
Yes, this community is really about nothing but slavish obedience; it’s not worth your time to contribute here.
He didn’t say anything about the community, just a method of predicting karma values for his comments. He didn’t make a value judgment or offer an explanation as to why that was. I’m pretty sure agreement with EY does correlate well with positive karma.
It’s an ADBOC thing. I’ve just learned there’s a technical term for what I want to say here: Implicature—there is a reason for mentioning this particular correlation besides its truth.
Connotations travel through value-judgments and through false dichotomies—for example, “These people do this, while good people have bad things happen to them.” You are juxtaposing two groups, and you are relying on the audience to figure out what dimension you’re showing two sides of. The latter half of that sentence says “Good people” and “Bad things,” so the first half must be “bad people” and “good things.”
The only value judgment I see in zero_call’s statement is the word “simple.”
I don’t find that that’s necessarily correct. For example, this post of mine expressing skepticism about cryonics or this one questioning a highly rated post were both fairly highly rated. I think needless contrarianism gets downvoted, but reasonable arguments generally don’t, even if they advance unpopular cases.
You’d be surprised at the amount of positive karma you can get from a well-phrased criticism of Eliezer, even ones that are relatively content-free. “I’ll believe you know what you’re talking about when you actually build an AI” is generally well-received.
A fine first-approximation, low-computation predictor! You can do better, though. For example, I’m pretty sure EY would agree that your method is simple and that it is fairly accurate (if nothing else, hopefully agreeing with EY or supporting the status quo has positive correlation with being right!). Which means your approximation predicts that your comment will get positive karma.
As well as the implied disapproval and disagreement in my downvote I will add that the reception your comment has received does not constitute any sort of meta-level proof of your point. You are being downvoted here for being wrong and being condescending. Posters can gain karma for disagreeing with EY, often even when their reasoning is poor.
Also—make a comment implying that people agree with EY because of groupthink, get downvoted. ;)
This post will get negative karma. Prove me wrong /LW/
In b4 “Bayesian proof or GTFO.”