As a counterpoint, my highest rated comments are huge walls of text. This could be because (a) I don’t make a lot of jokes or (b) I make crappy jokes or (c) people like my walls of text more than the typical wall of text or (d) something else.
I keep an eye on my karma and have noticed these things that I believe are related to your post:
Talking about the karma system has fallen out of favor. I think people are getting tired of it.
Asking why something was downvoted usually brings more upvotes unless you really, really deserved the downvotes. This latter case will probably be swarmed with downvotes.
Some (many?) people vote with an end score in mind. These meta-voters have more effect on threshold comments that fluctuate between −2 and +2.
Long conversations will generally pull between −1 and +2 per post. Most of my karma comes from lengthy discussions in the comments. Even if the top level post was only rated +2 I will net almost 100 karma points from the post and comments. (Note: I haven’t actually added this up. It may be closer to 75 karma.)
Quick responses pointing out third alternatives or simple problems get upvoted and usually roam between +2 and +14. If you want to get your karma higher, this is the easiest way. Comment immediately after a post is submitted and point out the most obvious flaw respectfully and concisely. Don’t try to make a point, just note an error. If you get in before the rest of us, you will probably get +4 or higher.
Long responses pointing out serious problems get upvoted but have a lower chance of pulling large amounts of karma than quick responses. However, after the first wave of quick responses, only the longer comments are true candidates for higher karma. I suspect that a long response to a quick comment has a good chance to do well, but haven’t really watched those comments yet.
Jokes are upvoted when they are either extremely funny or solidly funny and on topic. Randomness is upvoted if it is an inside joke; otherwise it stays around +0. Sarcasm is appreciated but runs the risk of being mistaken for non-sarcasm.
Extending the point or conversation of a top-level post gets upvoted. Most of mine get between +2 and +4. Examples would be almost every comment I have made while reading the sequences.
Aggressiveness is generally poorly received on technical topics and easier to get away with on fuzzy topics. I attribute this mostly to margin of error: it is harder to be bulletproof on technical topics. I am still having a hard time predicting which non-technical top level posts get voted up. I suspect this is because I don’t know what has already been discussed, or because the issue is actually a technical topic that I have misidentified.
Self-deprecation is a huge karma pull. Both my highest rated post and comment were essentially me slamming myself over and over. Each was voted higher than +20.
Bullet point lists of extensions, ideas, questions, and so on seem to do as well as or better than long paragraphs of text. Perhaps the walls of text are harder to skim for goodies, or the bullet point lists are better organized.
Non-aggressive requests for clarification or information are not generally downvoted. Mine seem to roam between 0 and +2 karma. If the question and response develop into a lengthy but friendly conversation, I seem to get between 0 and +2 karma for each of my comments. If a solid agreement or conclusion is reached, the capping comment gets about double whatever the individual comments were getting.
Posting near “famous” people amps up the karma action. Replying to EY, Alicorn, Vladimir_Nesov, pjeby, et al. will increase the number of people who read your comments. The reasons for this are varied. The four I used here are just names that popped into my head. Also, some people seem to vote up conversations they are in while others do not. A few downvote anyone disagreeing with them. The people that matter generally fit these criteria: (a) top contributor, (b) easily recognizable name, (c) frequent poster/commenter, (d) abnormal amount of recent activity, (e) holds atypical beliefs for the community, (f) is a troll.
Better grammar, spelling, and language increase the likelihood that your comment will move upward faster.
Comments quickly upvoted higher than typical seem to either (a) go through the roof or (b) get meta-voted back to between +1 and +3.
Telling people to vote in a particular way tends to produce easy-to-predict results, but not in a manner that is easy to describe.
And oh wow did that get long. Do note that this is all being typed off the top of my head, using myself and the comments I read as examples. Naturally, the above does not dictate how people vote.
EDIT: I guess for fun, I predict that this comment gets (ROT13) orgjrra cbfvgvir gjb naq cyhf fvk xnezn.
EDIT 2: I would normally downvote a post such as this, but elsewhere in the comments you seem to have received the message and were wondering about deleting it. So I just left it as it is. Also relevant: I generally do not upvote jokes unless they are truly amazing.
What is the point of this post? I seem to have missed it entirely. Can anyone help me out?
Is the point that predicting the end result of particular criteria is difficult because bias gets in the way? And, because it is difficult, should we start small with stuff like gene fitness and work up to bigger problems like social ethics?
Or… is the point that natural selection is a great way to expose the biases at work in our ethics choice criterion?
I am not tracking on something here. This is a summary of the points in the post as I see them:
We are unable to accurately study how closely the results of our actions match our own predictions of those results.
The equivalent problem in decision theory is that we are unable to take a set of known choice criteria and predict which choice will be made given a particular environment. In other words, we think we know what we would/should do in event X but we are wrong.
We possess the ability to predict any particular action from all possible choice criteria.
Is it possible to prove that a particular action does or does not follow from certain choice criteria, thereby avoiding our tendency to predict anything from everything?
We need a bias free system to study that allows us to measure our predictions without interfering with the result of the system.
Natural selection presents a system whose only “goal” is inclusive genetic fitness. There is no bias.
Examples show that our predictions of natural selection reveal biases in ourselves. Therefore, our predictions were biased.
To remove our bias with regards to human ethics, we should use natural selection as a calibration tool.
I feel like the last point skipped over a few points. As best as I can tell, these belong just before the last point:
When our predictions of the bias-proof system are accurate, they will be predictions without bias.
Using the non-biased predictors we found to study the bias-proof system, we can study other systems with less bias.
Using this outline, it seems like the takeaway is, “Don’t study ethics until after you have studied natural selection, because there is too much bias involved in studying ethics.”
Can someone tell me if I am correct? A simple yes or no is cool if you don’t feel like typing up a whole lot. Even, “No, not even close,” will give me more information than I have right now.