Burglar alarms, voting, PageRank? PageRank is definitely a very technological solution to a serious conflict-of-interest problem, and its effectiveness was a key driver of Google’s initial success. Why would you expect technology not to be helpful here?
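For concreteness, the core idea behind PageRank can be sketched in a few lines: a page’s score is the stationary probability that a random surfer, who mostly follows links and occasionally jumps to a random page, ends up there. This is a simplified power-iteration sketch, not Google’s production system; the example link graph is made up.

```python
# Minimal power-iteration sketch of the PageRank idea. A page's rank is
# the stationary probability of a "random surfer" who follows an outgoing
# link with probability d and jumps to a uniformly random page otherwise.
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)  # split rank among outlinks
                for q in outs:
                    new[q] += d * share
            else:  # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Toy graph: "c" is linked to by both "a" and "b", so it ranks highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
```

The conflict-of-interest angle is visible even in the toy version: your rank depends on other people’s links, not on anything you can write on your own page, which is exactly what made it harder to game than keyword-based ranking.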
Ok, those are pretty good examples. Though none of them are quite complete, in the sense that there’s still a bunch of human messiness with circumvention and countermeasures involved. Burglar alarms need human security personnel to back up the threat, voting is being gamed with gerrymandering and who knows what, and PageRank is probably in a constant arms race between SEO operators and Google engineers tweaking the system. They don’t work in a way where you just drop in the tech, go to sleep, and have the tech solve the social conflict, though they obviously help manage the conflict, possibly to a very large degree.
The idea in discussion forums where people spout the epigram often seems to be that the technical solution would just tick away without human supervision and solve the social conflict. Stuff that does that is extremely hard to build. Stuff that’s more a tool than a complete system will need a police department, or a Google, or full-time discussion forum moderators to do the actual work while being helped by the tool.
Modern Bayesian spam filters are another example of a well-working technical solution to a social conflict, though. I don’t know how much of an arms race something like Gmail’s filter is, but it gives me the vibe of a standalone system actually solving the problem, even more than PageRank does, though I don’t know the inner details of either very well.
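The Bayesian filtering idea is simple enough to sketch: learn per-word frequencies from labeled spam and non-spam messages, then combine them with Bayes’ rule to score new messages. This toy version (naive Bayes with Laplace smoothing, in the spirit of the classic Bayesian-filter designs) is purely illustrative; it is not how Gmail’s actual pipeline works, and the training messages are made up.

```python
import math
from collections import Counter

# Toy Bayesian spam filter: score a message by combining per-word spam
# probabilities learned from labeled examples. Laplace (+1) smoothing
# avoids zero probabilities for words unseen in one class.
class NaiveBayesFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, words, is_spam):
        (self.spam_words if is_spam else self.ham_words).update(words)
        if is_spam:
            self.spam_msgs += 1
        else:
            self.ham_msgs += 1

    def spam_probability(self, words):
        total_msgs = self.spam_msgs + self.ham_msgs
        # work in log space to avoid underflow on long messages
        log_spam = math.log(self.spam_msgs / total_msgs)
        log_ham = math.log(self.ham_msgs / total_msgs)
        spam_total = sum(self.spam_words.values())
        ham_total = sum(self.ham_words.values())
        vocab = len(set(self.spam_words) | set(self.ham_words))
        for w in words:
            log_spam += math.log((self.spam_words[w] + 1) / (spam_total + vocab))
            log_ham += math.log((self.ham_words[w] + 1) / (ham_total + vocab))
        # P(spam | words) via the log-odds
        return 1 / (1 + math.exp(log_ham - log_spam))

f = NaiveBayesFilter()
f.train("buy cheap pills now".split(), True)
f.train("cheap pills cheap".split(), True)
f.train("meeting agenda for monday".split(), False)
f.train("lunch on monday".split(), False)
score = f.spam_probability("cheap pills".split())
```

The arms-race question shows up here too: spammers respond with word misspellings, image spam, and ham-word padding, which is part of why real filters layer many more signals on top of word statistics.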
When I hear people say “you’re proposing a technical improvement to a social problem”, they are not cheering on the effort to continually tweak the technology to make it more effective at meeting our social ends; they are calling for an end to the tweaks. From what you say above, that’s the wrong direction to move in. PageRank got worse as it was attacked and needed tweaking, but untweaked PageRank today would still be better than untweaked AltaVista. “This improvement you’re proposing may be open to even greater improvement in the future!” doesn’t seem like a counterargument.
In many instances, the technology doesn’t directly try to determine the best page, or candidate; it collects information from people. The technology is there to make a social solution to a social problem possible. That’s what we’re trying to do here.
I mostly agree with you that the statement against technical solutions is false on the face of it.
How about this: if you want to prevent certain types of discussion and interaction in an online community, the members need to have some kind of consensus against it (the “social” part of the solution). Otherwise technical measures will either be worked around (if plenty of communication can still happen) or the community will be damaged (if communication is blocked enough to achieve the stated aim).
Technical measures can change the amount of consensus needed from complete unanimity to something more achievable.
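A toy way to see the point: without any mechanism, hiding a bad comment requires every single reader to individually decide to ignore it, but a vote threshold only requires that enough voters disapprove. The function name and threshold value below are made up for the illustration.

```python
# Toy illustration: a downvote threshold turns "everyone must individually
# agree to ignore this" into "enough voters disapprove". The threshold
# value is arbitrary and would be a community tuning decision.
def visible_comments(comments, hide_below=-3):
    """comments: list of (text, score) pairs; drop heavily downvoted ones."""
    return [text for text, score in comments if score >= hide_below]

thread = [("thoughtful reply", 12), ("obvious troll bait", -7), ("meh", -1)]
shown = visible_comments(thread)
```

Note that the mechanism still presumes the social part: it only works if voters broadly agree on what deserves a downvote, which is the consensus question raised above.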
In our case, we may not have had the required amount of consensus against feeding trolls, or on what counts as a troll to avoid feeding.
Because this involves conflict of interest, it is a security issue, and people aren’t very good at thinking about those. Often they fail to take the basic step of asking “if I were the attacker, how would I respond to this?”. See “Inside the Twisted Mind of the Security Professional”.
When you think of discussion forum design as a security issue, determining just what should be considered an attack can get pretty tricky. Trying to hack other people’s passwords, sure. Open spamming and verbal abuse in messages, most likely. Deliberate trolling, probably, but how easy is it to tell what the intent of a message was? Formalizing “good faith discussion” isn’t easy. What about people sincerely posting nothing but “rationalist lolcat” macro pictures on the front page and other people sincerely upvoting them? Is a clueless commenter a 14-year-old who is willing to learn forum conventions and is a bit too eager to post in the meantime, or a 57-year-old who would like to engage you in a learned debate to show you the error of your ways of thought and then present you the obvious truth of the Space Tetrahedron Theory of Everything?
I’m not sure how what you say above is meant to influence what we recommend wrt possible changes to LW.

Basically that discussion forum failure modes seem to be very complex compared to what an autonomous technical system can handle, and the discussion on improving LW often seems to skirt around the role of human moderators in favor of trying to make a forum work with simple autonomous mechanisms.

How about if you substitute “nontrivial social conflict” with “social problem”?