I already have a simple, accurate method for predicting the karma values of my comments. Agree with EY and get positive karma, or disagree with EY and get negative karma. More generally, support the status quo and get positive karma, or reject the status quo and get negative karma.
Yes, this community is really about nothing but slavish obedience; it’s not worth your time to contribute here.
He didn’t say anything about the community, just gave a method for predicting the karma values of his comments. He didn’t make a value judgment or offer an explanation of why that is. I’m pretty sure agreement with EY does correlate well with positive karma.
It’s an ADBOC (Agree Denotationally, But Object Connotationally) thing. I’ve just learned there’s a technical term for what I want to say here: implicature. There is a reason for mentioning this particular correlation besides its truth.
Connotations travel through value judgments and through false dichotomies. For example: “These people do this, while good people have bad things happen to them.” You are juxtaposing two groups and relying on the audience to figure out which dimension you are showing two sides of. The latter half of that sentence says “good people” and “bad things,” so the first half must imply “bad people” and “good things.”
The only value judgment I see in zero_call’s statement is the word “simple.”
I don’t think that’s necessarily correct. For example, this post of mine expressing skepticism about cryonics and this one questioning a highly rated post were both fairly highly rated. I think needless contrarianism gets downvoted, but reasonable arguments generally don’t, even when they advance unpopular cases.
You’d be surprised at the amount of positive karma you can get from a well-phrased criticism of Eliezer, even one that is relatively content-free. “I’ll believe you know what you’re talking about when you actually build an AI” is generally well-received.
A fine first-approximation, low-computation predictor! You can do better, though. For example, I’m pretty sure EY would agree that your method is simple and that it is fairly accurate (if nothing else, hopefully agreeing with EY or supporting the status quo correlates positively with being right!). That means your own approximation predicts that your comment will get positive karma.
As well as the implied disapproval and disagreement in my downvote, I will add that the reception your comment has received does not constitute any sort of meta-level proof of your point. You are being downvoted here for being wrong and for being condescending. Posters can gain karma for disagreeing with EY, often even when their reasoning is poor.
Also: make a comment implying that people agree with EY because of groupthink, get downvoted. ;)