There is no strong reason that reasonable, informative discourse should be an attractor for online communities. Measures like karma or censorship are designed to address particular problems that people have observed; they aren’t even intended to be a real solution to the general issue. If you happen to end up with a community where most conversation is intelligent, then I think the best you can say is that you were lucky for a while.
The question is, do people think that this is the nature of community? There is a possible universe (possible with respect to my current logical uncertainty) in which communities are necessarily reliant on vigilance to survive. There is also a possible universe where there are fundamentally stable solutions to this problem. In such a universe, a community can survive the introduction of many malicious or misguided users because its dynamics are good rather than because its moderator is vigilant. I strongly, strongly suspect that we live in the second universe. If we do, I think trying to solve this problem is important (fostering intelligent discourse is more important than the sum of all existing online communities). I don’t mean saying “let’s try to change karma in this way and see what happens;” I mean saying, “let’s try to describe some properties that would be desirable for the dynamics of the community to satisfy, and then try to implement a system which provably satisfies them.”
I think in general that people too often say “look at this bad thing that happened; I wish people were better” instead of “look at this bad thing that happened; I wish the system required less of people.” I guess the real question is whether there are many cases where fundamental improvements to the system are possible and tractable. I suspect there are, and that in particular moderating online discussion is such a case.
“let’s try to describe some properties that would be desirable for the dynamics of the community to satisfy, and then try to implement a system which provably satisfies them.”
This might actually be a good idea. If LessWrong could produce some theory of good online communities (not just a set of rules that make online communities look like real-world communities, on the grounds that those happen to work), that would certainly say something for our collective instrumental rationality.