Not to mention the quis custodiet ipsos custodes? problem.
You can’t create an algorithm for generally promoting good comments—that would require an artificial intelligence that could tell a good comment from a bad one. You can only create algorithms that make it easier or harder to protect the community values… whatever they are.
Imagine a website with 10 people, where an 11th person comes and writes a good comment. But for some irrational reason, the original 10 people all dislike the comment. Does the system allow them to remove it? Yes or no?
If you say “yes”, you have the “quis custodiet ipsos custodes” situation. But if you say “no”, then the situation will be exactly the same when the 11th person posts a genuinely bad comment… the original 10 people will not be allowed to remove it either. Which is bad, and much more frequent.
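The dilemma can be sketched as a toy removal rule (purely illustrative; the function name and the majority threshold are assumptions, not any actual site's logic):

```python
# Hypothetical sketch of a majority-vote removal rule.
# The point: the rule only sees votes, so it cannot tell
# a good comment from a bad one.

def is_removed(downvotes: int, total_users: int, threshold: float = 0.5) -> bool:
    """A comment is removed once more than `threshold` of users downvote it."""
    return downvotes / total_users > threshold

# 10 of 11 users downvote a *good* comment: removed anyway ("yes" answer).
print(is_removed(downvotes=10, total_users=11))  # True
# Answering "no" means never removing, so the same 10 downvotes on a
# genuinely *bad* comment leave it up as well. The rule is blind either way.
```

Whatever value the threshold takes, it applies identically to good and bad comments; only the voters' judgment distinguishes them.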
the rest of the visitors still see everything there is.
That’s not a good solution! It means that if there are hundreds of trollish comments on the website, no matter how heavily all my friends downvote them, I still have to see all of them. Too much noise.
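A per-viewer filter, as opposed to global removal, might look something like this toy sketch (the function, the names, and the three-friend limit are all assumptions for illustration):

```python
# Hypothetical sketch: hide a comment from *me* once enough of *my*
# friends have downvoted it, rather than removing it for everyone.

def visible_to(my_friends: set[str], downvoters: set[str], limit: int = 3) -> bool:
    """The comment stays visible until `limit` of my friends downvote it."""
    return len(my_friends & downvoters) < limit

friends = {"alice", "bob", "carol", "dave"}
downvoters = {"alice", "bob", "carol", "mallory"}
print(visible_to(friends, downvoters))  # False: three friends downvoted it
```

Each viewer gets their own filtering, so nothing is removed globally; strangers' downvotes alone never hide anything from me.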
That’s not a good solution! It means that if there are hundreds of trollish comments on the website, no matter how heavily all my friends downvote them, I still have to see all of them.
We will have to disagree about that.
I explicitly do NOT want other people to filter my information input. Don’t take this as an absolute—I’m fine with spam filters—but at this point in this particular context we do not have “hundreds of trollish comments”, and what gets downvoted is often just what the local population disagrees with.
I don’t want another echo chamber.
at this point in this particular context we do not have “hundreds of trollish comments”
I believe it’s because we are a relatively unknown website. We had a few trolls in the past, but they gradually went away or had their accounts deleted. With more fame, this could change… although until that happens, I cannot provide exact data.