I think our (effectively) requiring comments is better than what you’re proposing.
I don’t think I’ve published a post other than link posts, but even with my ‘poster’ hat on, I’d (personally) much prefer engagement and discussion to a simple ‘self-reported understanding’ count. I measure understanding relative to engagement and would estimate it based on the specific details in comments, e.g. whether several users have pointed out that something was confusing; what expected, or surprising, connections others make; whether others’ arguments about, and summaries or paraphrases of, my post match my own understanding of the topic.
I wouldn’t trust a simple count of the number of users who report ‘understanding’ a post, and thus I wouldn’t find it particularly valuable.
But I agree with both of your last points – your proposal very well might result in more feedback, and those metrics would be trivially accessible compared to manually interpreting some number of text comments.
I’d prefer that LessWrong remain as-is in this way.
But I think you could implement this yourself with external survey tools – and I’d be very interested in reading about any experiments along those lines!
e.g.: if you can see that X% of people understood your post, that gives you an idea of how understandable it was
I predict many more people would say whether they understood the post if they could do so with a react rather than a comment
plus, compiling comments into a broad overview takes a long time