Re. “Reducing total negative karma” as a goal:
Negative karma is already less common than positive karma. This is good, since it would be bad if the “average user” couldn’t post. But without a justified target for what the proper amount of negative karma is, setting “reduce negative karma” as a goal isn’t reasonable. How do we know we don’t already have the right amount? Or too little?
There is an underlying assumption about what “negative karma” means. It hopefully means things like:
this post is wrong
this is a badly expressed opinion
and various other reasons for downvoting. If we assume that enough downvotes means we are not effectively communicating useful thoughts, then we want to minimise that. Of course this may not be representative if, say, our number of active accounts doubles in size; we should then expect more negative karma simply as a function of the number of people in the conversation.
As we know, what gets measured gets optimised for. I want to keep the measures varied and numerous so that we optimise in the general direction of less bad stuff and more good stuff.
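To make the scaling point concrete, here is a minimal sketch (Python, with made-up numbers; the names are hypothetical and not any actual LW metric) of tracking the per-account downvote rate alongside the raw total, so that doubling the active accounts doesn’t read as doubling the bad stuff:

```python
# Hypothetical sketch: report raw downvotes and the per-account rate
# side by side, since raw totals are expected to grow with the number
# of active accounts.

def negative_karma_indicators(total_downvotes, active_accounts):
    """Return a pair of indicators rather than a single number."""
    per_account = total_downvotes / active_accounts if active_accounts else 0.0
    return {"raw_downvotes": total_downvotes,
            "downvotes_per_account": per_account}

# If active accounts double and downvotes double with them, the raw
# figure changes but the per-account rate stays flat:
print(negative_karma_indicators(500, 1000))
# -> {'raw_downvotes': 500, 'downvotes_per_account': 0.5}
print(negative_karma_indicators(1000, 2000))
# -> {'raw_downvotes': 1000, 'downvotes_per_account': 0.5}
```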
But if we assume that enough downvotes means we are effectively filtering out the stupid stuff, then we want to maximize that.
I agree with PhilGoetz that “less negative karma” is a bad goal. It’s trivially reachable by eliminating downvotes, for example.
It’s a terrible goal on its own, and definitely not to be taken on its own. If not taken to the extreme of aiming for zero downvotes or something stupid like that, I think it represents a (maybe bad) measure of how disagreeable we are.
If you think it’s completely unrepresentative I will take it out; I was of the opinion that it can show something and is worth checking up on, though probably not optimising for.
Yes, I think that’s quite right: the amount of negative karma might be a useful indicator (together with other indicators), but it’s not a good target for optimization.
This is not an unusual phenomenon.
No, it’s even simpler than that. Think about using salt in cooking—if you produce an oversalted dish that’s a problem that you should notice and fix, but talking about minimizing the amount of salt is silly (I’m talking gastronomically, not nutritionally).
I think there are two separate things going on here.
It might (at present) be beneficial to reduce X, but the optimal level might not be zero.
Treating X as a target for optimization might be harmful.
(Here X is “amount of salt” for your oversalted dish, and “amount of downvoting” for present-day LW.)
Addressing the alleged “too much negative karma” problem by prohibiting downvotes would be bad in both respects. But whatever target we might pick, aiming for exactly that level of downvoting and optimizing would likely give bad results, whereas picking a target level of saltiness in your dish and optimizing might work just fine.
The point is that you optimize for taste and let saltiness fall where it may. Similarly, LW should optimize for some metric of “goodness” and let negative karma be whatever it has to be to produce that deliciousness.
Of course. But that metric may be ill-specified and hard to measure, and something else may be a usable proxy.
Your (perfectly correct) point is that optimizing a poorly chosen proxy (e.g., minimizing the amount of salt) can produce very poor results. My point is that even if you have what looks like an excellently chosen proxy, as soon as you start optimizing it your (or others’) ingenuity is liable to turn up ways to improve it while making what you care about worse.
(None the less, proxy measurements are really useful. I believe we are agreed that at the very least they’re worth keeping an eye on as a rough guide, provided you also keep an eye on whether they’re ceasing to be useful proxies.)
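Here is a toy illustration of that failure mode (essentially Goodhart’s law), since it can look abstract in prose. This is a sketch in Python; the numbers and the “effort”/“gaming” split are invented for illustration, not a model of LW. The proxy starts out correlated with what we care about, then comes apart as soon as we hill-climb on it:

```python
# Toy Goodhart demo: the proxy rewards both genuine effort and "gaming",
# but gaming moves the proxy more cheaply, so an optimiser pointed at
# the proxy alone drives the true value down.

def true_value(effort, gaming):
    # What we actually care about: helped by effort, hurt by gaming.
    return effort - 0.5 * gaming

def proxy(effort, gaming):
    # The measurable proxy: gaming moves it twice as fast as effort.
    return effort + 2.0 * gaming

effort, gaming = 1.0, 0.0
for step in range(5):
    # Greedy hill-climbing on the proxy: try raising either knob and
    # keep whichever candidate scores higher on the proxy.
    candidates = [(effort + 0.1, gaming), (effort, gaming + 0.1)]
    effort, gaming = max(candidates, key=lambda c: proxy(*c))
    print(f"step {step}: proxy={proxy(effort, gaming):.2f}, "
          f"true value={true_value(effort, gaming):.2f}")

# The proxy climbs every step while the true value falls: "improving"
# the measure while making the thing we care about worse.
```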
That, however, is not the case here.
I agree. (Did you expect me not to? If so, I apologize for anything misleading in what I wrote.)