Theory that Jimrandomh was talking about the other day, which I’m curious about:
Before social media, if you were a nerd on the internet, the way to get interaction and status was via message boards / forums. You’d post a thing, and get responses from other people who were filtered for being somewhat smart and confident enough to respond with a text comment.
Nowadays, most people post things on social media and get rewarded much more quickly via reacts, based on a) a process that is more emotional than routed-through-verbal-centers, and b) you get rewards from a wider swath of the population. Which means, in practice, you’re getting your incentive gradient from less thoughtful people, both due to the medium and due to regression to the mean.
This feeds a bit into my model of “Do we want reacts on LessWrong?”, and when/why reacts might be bad for society.
I’d previously talked about how it would be neat if LW reacts specifically gave people affordance to think subtler epistemically-useful thoughts.
This new model adds a thing like “Maybe we actually just want reacts to be available only to people with 1000+ karma or so, so that the signal comes disproportionately from people who have demonstrated at least some reasonable threshold of thoughtfulness.” (This has the obvious downside of increasing groupthink, which I do take seriously, but there’s an unfortunate tradeoff between “increasing groupthink” and “getting your signal from random society, which is pretty bad”, and I’d currently lean towards the former if I had to pick one. I do eventually want a filtering system that selects for “thoughtfulness” more directly and reliably than the karma system does.)
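A minimal sketch of what the gating idea might look like (the threshold value, function names, and data model are all hypothetical illustrations, not LessWrong’s actual implementation):

```python
# Hypothetical sketch of karma-gated reacts. The 1000-karma threshold
# and the function signature are illustrative assumptions.
REACT_KARMA_THRESHOLD = 1000


def can_react(user_karma: int, threshold: int = REACT_KARMA_THRESHOLD) -> bool:
    """Only users at or above a karma threshold may leave reacts,
    filtering react-feedback toward users with a demonstrated
    track record of contributions the community has upvoted."""
    return user_karma >= threshold
```

The design choice here is just a proxy: karma measures “the community liked your past contributions”, which is correlated with but not identical to thoughtfulness, which is why the comment above notes wanting a better filter eventually.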
There is a trade-off: would you prefer higher-quality feedback with a greater chance of no feedback at all, or a greater probability of feedback that will most likely be lower-quality?
Maybe this is a problem with social media: sometimes we get a lot of feedback, and sometimes we get high-quality feedback, and it kinda makes us expect that it should be possible to get lots of high-quality feedback constantly. But that is not possible, so people are dissatisfied.
I don’t participate in a very wide swath of social media, so this may vary beyond FB and the like. But from what I can tell, reacts do exactly the opposite of what you say—they’re pure mood affiliation, with far less incentive or opportunity for subtlety or epistemically-useful feedback than comments have.
The LW reacts you’ve discussed in the past (not like/laugh/cry/etc, but updated/good-data/clear-modeling or whatnot) probably DO give some opportunity, but can never be as subtle or clear as a comment. I wonder if something like Slack’s custom-reacts (any user can upload an icon and label it for use as a react) would be a good way to get both precision and ease. Or perhaps just a flag for “meta-comment”, which lets people write arbitrary text that’s a comment on the impact or style or whatnot, leaving non-flagged comments as object-level comments about the topic of the post or parent.
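One way to picture the two proposals in that paragraph—typed reacts plus a “meta-comment” flag—is as a small data model (all names here are hypothetical illustrations, not an actual LessWrong or Slack schema):

```python
from dataclasses import dataclass, field


# Illustrative data model only: "kind" strings and field names are
# assumptions, sketching typed reacts and a meta-comment flag.
@dataclass
class React:
    user: str
    kind: str  # e.g. "updated", "good-data", "clear-modeling"


@dataclass
class Comment:
    user: str
    body: str
    # True = feedback about impact/style; False = object-level
    # discussion of the post or parent comment's topic.
    is_meta: bool = False
    reacts: list = field(default_factory=list)


def object_level(comments: list) -> list:
    """Filter out meta-comments, leaving only object-level discussion."""
    return [c for c in comments if not c.is_meta]
```

The flag keeps the two conversations separable: readers who want the object-level thread can filter meta-feedback out, while authors still receive it.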
This isn’t intended at all to replace comments. The idea here is giving people affordance to leave lower-effort ‘pseudo-comments’ that are somewhere in between an upvote/downvote and a comment, so that people who find it too effortful to write a comment can express some feedback.
The hypothesis is that this gets you more total feedback.
I was mostly reacting to “I’d previously talked about how it would be neat if LW reacts specifically gave people affordance to think subtler epistemically-useful thoughts”, and failed my own first rule of evaluation: “compared to what?”.
As something with more variations than karma/votes, and less distracting/lower hurdle than comments, I can see reacts as filling a niche. I’d kind of lean toward more like tagging and less like 5-10 variations on a vote.