Tangentially related: I was in the HPMOR thread and noticed that there’s a strong tendency to reward good answers but only a weak tendency to reward good questions. The questions are actually more important than the answers, since they’re a prerequisite to the answers, but they don’t seem to be treated as such. They have roughly half as much karma as the popular answers do, which seems unfair.
I would guess that this extends to the rest of the site as well, since it’s a fairly common thing for humans to do. Things would probably be better here if we tried to change that. As a rough rule of thumb, we should make it our general policy to upvote a question whenever the question itself is not stupid and it results in an answer that is insightful and deserves an upvote.
I tried not to use “we” in this comment, but the result was grammatically incoherent and it wasn’t worth the effort of fixing.
Disagree. Insightful-sounding questions are much, much easier to come up with than genuinely insightful answers, so even though the former is a prerequisite to the latter, rewarding them equally would create perverse incentives.
At least, that’s true if our goal is to maximize the number of insightful results we generate—which seems like a pretty reasonable assumption to me.
You cheated. You’re comparing “insightful-sounding questions” to “genuinely insightful answers”. Of course the genuine answers are going to come out ahead. That’s completely unfair to the suggestion. But, assuming that people on LessWrong actually have the ability to distinguish between insightful-sounding questions and genuinely insightful questions (which seems just as easy as distinguishing between insightful-sounding answers and genuinely insightful answers, btw) the proposal makes sense.
Your comment does not contain an argument. It contains a blatantly flawed framing of the proposal I put forward and a catchphrase, “perverse incentives”, without any explanation of the thinking behind that catchphrase. You never articulate what the actual impact of these perverse incentives would look like or how they would arise. Do you anticipate that if more people upvoted questions we would end up with fewer good results? I don’t see how such an outcome would occur, and I see no reason to believe the incentives you reference would ever materialize.
There’s a huge tendency within academia to ignore anything with partial solutions or doubts or blank spaces, and to undervalue questioning. Questions are inherently low status because they explicitly reveal a large gap in knowledge that cannot easily be overcome by the asker, and they also carry an element of submission to the “more intelligent” person who will answer. My suggestion is designed to counterbalance that. That the best way to maximize the number of insightful thoughts and results you produce is to ask insightful questions seems like a very reasonable assumption to me.
Moreover, putting forth the question, which occurs at an earlier point in the thought process, allows others to more easily follow whatever conclusions you may or may not reach. It also allows people to take the question down different avenues of thought and reach useful conclusions you would not even have considered.
Now, clearly we don’t want to ask questions for the sake of asking questions. But good questions are extremely important and should be encouraged. Upvoting more questions than usual and asking more questions as a general rule is therefore a good idea. The proposal can be selectively applied by the intelligent commenters of LessWrong, and none of the “perverse incentives” you envision will occur or do any damage to the site.
“Perverse incentives” isn’t a LW catchphrase. It’s a term from economics, used to describe situations where external changes in the incentive structure around some good you want to maximize actually end up maximizing something else at its expense. This often happens when the thing you wanted to maximize is hard to quantify or has a lot of prerequisites, making it easier to encourage things by proxy—which sometimes works, but can also distort markets. Goodhart’s law is a special case. I’d assumed this was a ubiquitous enough concept that I wouldn’t have to explain it; my mistake.
In this case, we’ve got an incentive (karma) and a goal to maximize (insightful results, which require both a question and a promising answer to it). In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question. Also in my experience, questions are cheap if you’re already closely familiar with the source material, which most of the people posting in the MoR threads probably are. If I’m right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.
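The marginal-incentive claim here reduces to a toy calculation. The effort and karma numbers below are entirely made up for illustration; only the ratios matter:

```python
# Hypothetical figures, purely illustrative: questions are assumed cheap
# for someone who knows the source material; promising answers require
# upfront analysis.
effort_question = 1.0   # hours to write a plausible question
effort_answer = 4.0     # hours to analyze and write up a promising answer

def karma_per_hour(karma, effort):
    """Return on effort for a karma-motivated commenter."""
    return karma / effort

# Norm 1: questions and answers rewarded equally.
equal_q = karma_per_hour(10, effort_question)  # questions: 10 karma/hour
equal_a = karma_per_hour(10, effort_answer)    # answers: 2.5 karma/hour

# Norm 2: reward roughly tracks effort (answers earn more).
prop_q = karma_per_hour(10, effort_question)
prop_a = karma_per_hour(40, effort_answer)

# Under equal rewards, a karma-sensitive commenter does better by
# writing new questions than by analyzing open ones.
print(equal_q > equal_a)
print(prop_q == prop_a)
```

Under these (assumed) effort asymmetries, equal rewards tilt karma-seekers toward question-posing at the margin, which is the disincentive described above.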
There are a number of ways this could fail in practice: the question or answer space might be saturated, or people’s inclinations in this area might be insensitive to karma (in which cases no amount of incentives either way would help). One of the premises could be wrong. But as marginal reasoning, it’s sound.
This is all reasoning that should have been made explicit in your original comment. Your objection has good thought behind it, but I had no way of knowing that from what you wrote. I knew that “perverse incentives” was an economics term, but since you made no attempt to describe why those incentives would arise, or why LessWrong commenters would have a hard time distinguishing intelligent questions from dumb ones, I thought you were invoking it without reason, treating the term like phlogiston. If the above thought process had been described in your comment, it would have made much more sense.
In my experience, which you evidently disagree with, judging the fruitfulness of questions (other than the trivial or obviously flawed) is difficult without putting effort into analyzing them: effort which is unproductive if expended on a dead-end question.
Isn’t this the same with answers? I don’t see why it wouldn’t be.
Also in my experience, questions are cheap if you’re already closely familiar with the source material, which most of the people posting in the MoR threads probably are.
Isn’t this the same with answers? I don’t see why it wouldn’t be.
If I’m right about both of these points, valuing insightful-sounding questions on par with insightful-sounding answers creates a karma disincentive to spend time in analysis of open questions (you could spend the same time writing up new questions and be rewarded more), and a proportionally lower number of results.
This only makes sense if people are rational agents. Given that you’ve already conceded that we irrationally undervalue good questions and questioners, doesn’t it make more sense that actively trying to be kinder to questioners would return the question/answer market to its proper equilibrium, thus maximizing utility?
I note the irony of asking questions here but I couldn’t manage to express my thoughts differently.
Isn’t [the difficulty of judging questions] the same with answers? I don’t see why it wouldn’t be.
If you come up with a good (or even convincing) answer, you’ve already front-loaded a lot of the analysis that people need to verify it. All you need to do is write it down—which is enough work that a lot of people don’t, but less than doing the analysis in the first place.
Isn’t [familiarity discounting for questions] the same with answers? I don’t see why it wouldn’t be.
It helps, but not as much. Patching holes takes more original thought than finding them.
This only makes sense if people are rational agents.
It makes sense if people respond to karma incentives. If they don’t, there’s no point in trying to change karma allocation norms. The magnitude of the incentive does change depending on how people view the pursuits involved, but the direction doesn’t.
Given that you’ve already conceded that we irrationally undervalue good questions and questioners...
I didn’t say this.
Actually, changing karma allocation norms could change the visibility of unanswered questions that are judged interesting.
This can be an end in itself, or an indirect karma-related incentive.