“Technical solutions for social problems almost never work.”
Why on Earth do people keep saying this? Sending out a party invite via email is a technical solution to a social problem, and it’s great! For God’s sake, taking the train to see a friend is a technical solution to a social problem. This phrase seems to have gained currency through repetition despite being trivially, obviously false on the face of it.
How about if you substitute “nontrivial social conflict” for “social problem”?
Burglar alarms, voting, PageRank? PageRank is definitely a very technological solution to a serious conflict-of-interest problem, and its effectiveness was a key driver of Google’s initial success. Why would you expect technology not to be helpful here?
Ok, those are pretty good examples. Though none of them are quite complete, in the sense that there’s still a bunch of human messiness involved in circumvention and countermeasures. Burglar alarms need human security personnel to back up the threat, voting is being gamed with gerrymandering and who knows what else, and PageRank is probably in a constant arms race between SEO operators and Google engineers tweaking the system. They don’t work in a way where you just drop in the tech, go to sleep, and have the tech solve the social conflict, though they obviously help manage the conflict, possibly to a very large degree.
The idea with discussion forums, where people spout the epigram, often seems to be that the technical solution would just tick away without human supervision and solve the social conflict. Stuff that does that is extremely hard to build. Stuff that’s more a tool than a complete system will need a police department, or a Google, or full-time discussion forum moderators to do the actual work while being helped by the tool.
Modern Bayesian spam filters are another example of a technical solution to a social conflict that works well, though. I don’t know how much of an arms race something like Gmail’s filter is, but this one gives me the vibe of a standalone system actually solving the problem, even more than PageRank, though I don’t know the inner details of either very well.
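To make the spam filter example concrete, here’s a toy naive Bayes classifier in Python, roughly in the spirit of the classic Bayesian filters. It’s a minimal sketch for illustration only; Gmail’s actual filter is certainly far more elaborate, and the word-level smoothing and message handling here are my own simplifications.

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy word-based naive Bayes spam filter (illustrative only)."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, message, is_spam):
        words = set(message.lower().split())
        if is_spam:
            self.spam_msgs += 1
            self.spam_words.update(words)
        else:
            self.ham_msgs += 1
            self.ham_words.update(words)

    def spam_probability(self, message):
        # Accumulate log-odds, with add-one smoothing so unseen words
        # don't force the probability to exactly 0 or 1.
        log_odds = math.log((self.spam_msgs + 1) / (self.ham_msgs + 1))
        for word in set(message.lower().split()):
            p_word_spam = (self.spam_words[word] + 1) / (self.spam_msgs + 2)
            p_word_ham = (self.ham_words[word] + 1) / (self.ham_msgs + 2)
            log_odds += math.log(p_word_spam / p_word_ham)
        return 1 / (1 + math.exp(-log_odds))

spam_filter = NaiveBayesSpamFilter()
spam_filter.train("cheap pills buy now", is_spam=True)
spam_filter.train("meeting notes for tuesday", is_spam=False)
print(spam_filter.spam_probability("buy cheap pills"))  # well above 0.5
```

The relevant property for this discussion is that the filter keeps learning from the stream of messages people mark as spam, so the “tweaking” is at least partly automated rather than done entirely by hand.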
When I hear people say “you’re proposing a technical improvement to a social problem”, they are not cheering on the effort to continually tweak the technology to make it more effective at meeting our social ends; they are calling for an end to the tweaks. From what you say above, that’s the wrong direction to move in. PageRank got worse as it was attacked and needed tweaking, but untweaked PageRank today would still be better than untweaked AltaVista. “This improvement you’re proposing may be open to even greater improvement in the future!” doesn’t seem like a counterargument.
In many instances, the technology doesn’t directly try to determine the best page, or candidate; it collects information from people. The technology is there to make a social solution to a social problem possible. That’s what we’re trying to do here.
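For concreteness, here’s a minimal Python sketch of the originally published PageRank idea, in which links act as votes that the algorithm merely aggregates. This is an illustration of the textbook algorithm, not what Google actually runs today.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        # Each page starts with a small baseline, then receives shares of the
        # rank of every page that links to it ("votes" from other pages).
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank over everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# C is linked to by both A and B, so it ends up with the highest rank.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```

The point is that the ranking comes almost entirely from what page authors chose to link to, i.e. from information collected from people.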
I mostly agree with you that the statement against technical solutions is false on the face of it.
How about this: if you want to prevent certain types of discussion and interaction in an online community, the members need to have some kind of consensus against them (the “social” part of the solution). Otherwise technical measures will either be worked around (if plenty of communication can still happen) or the community will be damaged (if communication is blocked enough to achieve the stated aim).
Technical measures can change the amount of consensus needed from complete unanimity to something more achievable.
In our case, we may not have had the required amount of consensus against feeding trolls, or on what counts as a troll to avoid feeding.
Because this involves a conflict of interest, it is a security issue, and people aren’t very good at thinking about those. Often they fail to take the basic step of asking “if I were the attacker, how would I respond to this?”. See “Inside the Twisted Mind of the Security Professional”.
When you think of discussion forum design as a security issue, determining just what should be considered an attack can get pretty tricky. Trying to hack other people’s passwords, sure. Open spamming and verbal abuse in messages, most likely. Deliberate trolling, probably, but how easy is it to tell what the intent of a message was? Formalizing “good faith discussion” isn’t easy. What about people sincerely posting nothing but “rationalist lolcat” macro pictures on the front page and other people sincerely upvoting them? Is a clueless commenter a 14-year-old who is willing to learn forum conventions and is a bit too eager to post in the meantime, or a 57-year-old who would like to engage you in a learned debate to show you the error of your ways of thought and then present you the obvious truth of the Space Tetrahedron Theory of Everything?
I’m not sure how what you say above is meant to influence what we recommend with respect to possible changes to LW.
Basically, that discussion forum failure modes seem to be very complex compared to what an autonomous technical system can handle, and that the discussion on improving LW often seems to skirt around the role of human moderators in favor of trying to make the forum work with simple autonomous mechanisms.
ADBOC. Email and trains might be “technical solutions for social problems” in the literal sense, but that’s not what that phrase normally means.
What does the phrase normally mean? Risto_Saarelma had a go, in a reply to me, at restricting it to the relevant domain, but that didn’t work. Could you describe what the phrase does normally mean? I’m not asking for a perfect, precise definition, just a pointer to the cluster of correlations it identifies.
I think it’s a misidentification of the reason why a certain class of proposed solutions to social problems do not work. The class consists of solutions which fail to take into account that people will change their behaviour as necessary to achieve whatever their purposes are, and will simply step around any easily-avoided obstacles that may be placed in their way.
The famous picture that Bruce Schneier once posted as an allegory of useless security measures, showing car tracks in the snow going around a barrier across the road, is an excellent example. “The Internet routes around censorship” is another.
(from The Weakest Link)
That seems like a plausible story! It’s also the message I was pointing at here.
“I know it when I see it”, but I’d say that e-mail and trains enable people to do what they want to do (namely, communicate and travel), whereas the prototypical “technical solutions for social problems” try to discourage people from doing what they want to do (e.g. Prohibition).
Making Light and Ta-Nehisi Coates’s blog have notably good comment sections, and both have strong moderation by humans.
LessWrong could use better and/or more moderation by humans.
This seems intuitively likely, and is probably true in many cases. In the end, if you don’t have good commenters, there may not be much to be done about it on a technical level. However, it’s not obvious to me that it applies here. For example, the entire karma system is a technical solution that seems, if not ideal, at least better than nothing at dealing with the social problem of filtering content on this site and on Reddit.
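As a concrete illustration of karma-as-filter, here’s a toy Python sketch: sort comments by net score and hide anything at or below a threshold. The threshold value and field names are made up for illustration; this isn’t LW’s or Reddit’s actual implementation.

```python
from dataclasses import dataclass

HIDE_THRESHOLD = -3  # hypothetical cutoff: hide comments at or below this score

@dataclass
class Comment:
    author: str
    text: str
    upvotes: int
    downvotes: int

    @property
    def score(self):
        return self.upvotes - self.downvotes

def visible_comments(comments):
    """Return comments worth showing, highest-scored first; low-scored ones are hidden."""
    kept = [c for c in comments if c.score > HIDE_THRESHOLD]
    return sorted(kept, key=lambda c: c.score, reverse=True)

thread = [
    Comment("alice", "Thoughtful point about moderation.", 12, 1),
    Comment("troll", "You are all idiots.", 2, 9),
    Comment("bob", "Minor nitpick on the argument.", 3, 2),
]
for c in visible_comments(thread):
    print(c.score, c.author, c.text)  # the troll's comment never shows up
```

The mechanism is crude, but notice that all the actual judgment still comes from people voting; the code only aggregates the votes and applies a cutoff, which is exactly the “social solution made possible by technology” point above.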