“Perverse incentives” isn’t a LW catchphrase. It’s a term from economics, used to describe situations where external changes in the incentive structure around some good you want to maximize actually end up maximizing something else at its expense. This often happens when the thing you wanted to maximize is hard to quantify or has a lot of prerequisites, making it easier to encourage things by proxy—which sometimes works, but can also distort markets. Goodhart’s law is a special case. I’d assumed this was a ubiquitous enough concept that I wouldn’t have to explain it; my mistake.
In this case, we’ve got an incentive (karma) and a goal to maximize (insightful results, which require both a question and a promising answer to it). In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question. Also in my experience, questions are cheap if you’re already closely familiar with the source material, which most of the people posting in the MoR threads probably are. If I’m right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.
There are a number of ways this could fail in practice: the question or answer space might be saturated, or people’s inclinations in this area might be insensitive to karma (in either case, no amount of incentives would help). One of the premises could be wrong. But as marginal reasoning, it’s sound.
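The marginal argument above can be made concrete with a back-of-the-envelope model. All the numbers below are hypothetical, chosen only to illustrate the direction of the incentive, not its magnitude:

```python
# Toy model of the karma-incentive argument: if questions and answers
# earn equal karma but questions are much cheaper to produce, the
# per-hour karma return on questions dominates.
# All numbers are hypothetical; only the comparison matters.

def karma_per_hour(karma_reward, hours_of_effort):
    """Expected karma return per hour spent on one contribution."""
    return karma_reward / hours_of_effort

# Assumption: questions are cheap for someone already familiar with
# the source material, while analyzing an open question takes effort.
question_rate = karma_per_hour(karma_reward=10, hours_of_effort=0.25)
answer_rate = karma_per_hour(karma_reward=10, hours_of_effort=2.0)

# A karma-sensitive poster would shift time toward asking questions.
print(question_rate > answer_rate)  # True under these assumptions
```

The direction of the comparison is insensitive to the particular numbers, as long as questions remain cheaper than analysis while the rewards stay comparable.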
This is all reasoning that should have been made explicit in your comment. Your objection has good thoughts behind it, but I had no way of knowing that from your previous comment. I knew that “perverse incentives” was an economics term, but thought you were referencing it without justification, since you made no attempt to describe why the perverse incentives would arise or why LessWrong commenters would have a hard time distinguishing intelligent questions from dumb ones. I thought you were treating the economic catchphrase like phlogiston. If your above thought process had been described in your comment, it would have made much more sense.
In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question.
Isn’t this the same with answers? I don’t see why it wouldn’t be.
Also in my experience, questions are cheap if you’re already closely familiar with the source material, which most of the people posting in the MoR threads probably are.
Isn’t this the same with answers? I don’t see why it wouldn’t be.
If I’m right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.
This only makes sense if people are rational agents. Given that you’ve already conceded that we irrationally undervalue good questions and questioners, doesn’t it make more sense that actively trying to be kinder to questioners would return the question/answer market to its objective equilibrium, thus maximizing utility?
I note the irony of asking questions here but I couldn’t manage to express my thoughts differently.
Isn’t [the difficulty of judging questions] the same with answers? I don’t see why it wouldn’t be.
If you come up with a good (or even convincing) answer, you’ve already front-loaded a lot of the analysis that people need to verify it. All you need to do is write it down—which is enough work that a lot of people don’t, but less than doing the analysis in the first place.
Isn’t [familiarity discounting for questions] the same with answers? I don’t see why it wouldn’t be.
It helps, but not as much. Patching holes takes more original thought than finding them.
This only makes sense if people are rational agents.
It makes sense if people respond to karma incentives. If they don’t, there’s no point in trying to change karma allocation norms. The magnitude of the incentive does change depending on how people view the pursuits involved, but the direction doesn’t.
Given that you’ve already conceded that we irrationally undervalue good questions and questioners...
I didn’t say this.
Actually, changing karma allocation norms could change the visibility of unanswered questions that are judged interesting.
This can be an end in itself, or an indirect karma-related incentive.