These are just initial thoughts; I haven’t thought about it a lot.
Possibly, some of those ways would only be available on some posts, depending on whether the poster selected them.
Some that seem valuable:
- whether a post will change your actions
- your credence in the post’s central claim / a binary value of whether you agree with it
- whether you updated / changed your mind from reading the post
- how well (or whether) you understood the post
I recently thought about this again because I wanted to know if people would change how they use the term “zero sum” after reading “Zero sum is a misnomer”.
I also like the suggestions from other people, which I’ve shared as separate answers.
But what’s the benefit of having a small set of ‘standard reacts’ instead of allowing/requiring that users express their thoughts and feelings in text?
This site in particular might be better because we don’t provide low-effort options for providing (effectively very-low-information) feedback.
Are you looking for ‘cheap survey data’ for your own posts here? Maybe you could just set up surveys for all of your posts with the info you’d like people to provide instead.
> But what’s the benefit of having a small set of ‘standard reacts’ instead of allowing/requiring that users express their thoughts and feelings in text?
Would you rather see a dozen replies to a post that simply say “Updated.”? Probably not, and neither does anyone else, so people don’t post them, which means we’re deprived of that useful feedback. Maybe one person replies “Updated.” and gets the karma for being first, instead of (or in addition to) the original author. That doesn’t seem fair. Or maybe they get downvoted for a reply that’s too short, even if many people agree with it. With an “updated” react, this just works.
> This site in particular might be better because we don’t provide low-effort options for providing (effectively very-low-information) feedback.
I think this is a good point, but there are tradeoffs. We want to lower the bar to providing feedback so we can get more feedback, but not so much that we disincentivize discussion. I suspect the tradeoff is worth it, but that’s a guess. If someone has something they’re willing to say, how often will a react prevent them from saying it? They’d probably do both, just as people often downvote and also explain why.
I don’t mind any number of people replying “Updated”. So, yes, I would prefer that over a count of some small number of ‘standard reactions’.
But, as I suggested in another comment on this post, external survey tools could easily be used to gather this data or feedback if you or anyone else really thinks it’s valuable or useful.
I would like to see some evidence about how well those work and how useful that gathered data is before I change my mind about this being useful here.
> We want to lower the bar to providing feedback so we can get more feedback
I don’t want this, as I don’t think ‘more feedback’ is particularly useful, valuable, or germane to this site.
I’m also thinking about this request in terms of the additional work, and ongoing maintenance, it would require from the site’s developers/maintainers.
I’m also unclear on why anyone would want to (seemingly) optimize for info as incredibly low-density as the count of ‘reacts’ on posts. We are mostly – at times explicitly – trying to avoid persuading each other and instead focus on sharing our (detailed) thoughts and feelings so that we can, as a group, reason better. Given that, this all seems exactly backwards.
> ex.: if you can see that X% of people understood your post, then it gives you an idea of how understandable it was
> I predict many more people would say whether they understood the post if they could do so with a react rather than a comment
> plus, compiling comments to get a broad overview takes a long time

I think our (effectively) requiring comments is better than what you’re proposing.
I don’t think I’ve published any posts other than link posts, but even with my ‘poster’ hat on, I’d (personally) much prefer engagement and discussion to a simple ‘self-reported understanding’ count. I measure understanding relative to engagement and would estimate it based on the specific details in comments, e.g. whether several users have pointed out that something was confusing; what expected, or surprising, connections others make; whether the arguments about, and summaries or paraphrases of, my post match my own understanding of the topic.
I wouldn’t trust a simple count of users who report ‘understanding’ a post, and thus I wouldn’t find it particularly valuable.
But I agree with both of your last points – your proposal very well might result in more feedback, and these metrics would be trivially accessible compared to manually interpreting some number of text comments.
I’d prefer that LessWrong remain as-is in this way.
But I think you could implement this yourself with external survey tools – and I’d be very interested in reading about any experiments along those lines!