In any case, I do have a solution for you: why don’t you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma?
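A minimal sketch of what such a user script might look like. Everything site-specific here is invented for illustration: the `.comment` selector and the `data-author-karma` attribute are hypothetical, and a real script would need whatever markup the target site actually uses. The filtering rule is kept as a pure function, separate from the DOM wiring.

```javascript
// Hypothetical Greasemonkey-style filter. The CSS selector and data
// attribute below are invented; adapt them to the actual site markup.
const KARMA_THRESHOLD = 5000;

// Pure filtering rule, separated from the DOM so it is easy to test.
function shouldHide(karma, threshold = KARMA_THRESHOLD) {
  return karma < threshold;
}

// DOM wiring: only runs in a browser (e.g. inside a user script).
if (typeof document !== "undefined") {
  for (const comment of document.querySelectorAll(".comment")) {
    // Assumed: each comment carries its author's karma in a data attribute.
    const karma = parseInt(comment.dataset.authorKarma ?? "0", 10);
    if (shouldHide(karma)) comment.style.display = "none";
  }
}
```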
This would disrupt the flow of discussion.
I tried this on one site. The script did hide the offending comments from my eyes, but other people still saw those comments and responded to them. So I did not have to read the bad comments, but I still had to read the reactions to them. I could have improved my script to filter out those reactions too, but...
Humans react to their environment. We cannot consciously decide to filter something out and refuse to be influenced by it. If I come to a discussion with 9 stupid comments and 1 smart comment, my reaction will be different from what it would be if there were only the 1 smart comment. I can’t filter those 9 comments out: reading them wastes my time and changes my emotions. So even if you filter those 9 comments out with software but I don’t, the discussion between the two of us will still be indirectly influenced by them. Most probably, if I see 9 stupid comments, I will stop reading the article, so I will skip the 1 smart one too.
People have evolved communication strategies that don’t work on the internet, because the necessary infrastructure is missing. If the two of us were speaking in the real world and a third person tried to join our discussion, but I considered them rather stupid, you would see it in my body language even if I didn’t openly tell the person to buzz off. But when we speak online and I ignore someone’s comments, you don’t see it; this communication channel is missing. Karma does something like this; it just represents a collective emotion instead of an individual one. (Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.)
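The personalized-karma idea in the parenthetical could be sketched like this. The data shapes are invented for illustration: votes as `{ voter, value }` records and the reader's "smart" list as a set of usernames.

```javascript
// Hypothetical sketch: karma computed only from the votes of people
// the reader has marked as "smart". Data shapes are invented.
function personalizedKarma(votes, trusted) {
  // votes: array of { voter, value } where value is +1 or -1
  // trusted: Set of usernames the reader has selected
  return votes
    .filter(v => trusted.has(v.voter))
    .reduce((sum, v) => sum + v.value, 0);
}
```

The same vote data then yields a different score for every reader, depending on whose clicks they chose to count.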
Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
So I did not have to read the bad comments, but I still had to read the reactions to them.
I see, so you felt that the comments of “smart” (as per your filtering criteria) people were still irrevocably tainted by the fact that they were replying to “stupid” (as per your filtering criteria) people. In this case, I think you could build upon my other solution. You could blacklist everyone by default, then personally contact individual “smart” people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again.
Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.
Slashdot has something like this (though not exactly). I think it’s a neat idea. If you implemented this, I’d even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side.
Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
And everyone’s assumptions are different, which is why I’m very much against global solutions such as “ban everyone who hasn’t read the Sequences”, or something to that effect.
Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are—which is what nearly always happens when people decide to shut out dissenting voices. That’s just my personal choice, though; anyone else should be able to form whichever cabal they desire, based on their own preferences.
You could blacklist everyone by default, then personally contact individual “smart” people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again.
The first step (blacklisting everyone except me and the people I approve of) is easy. Expanding the network depends on other people joining the same system, or at least being willing to send me a list of people they approve of. I think that most people use default settings, so this system would work best on a site where this would be the default setting.
It would be interesting to find a good algorithm that takes the following data as input: each user can put other users on their whitelist or blacklist, and can upvote or downvote other users’ comments. It could somehow calculate the similarity of opinions and then show everyone the content they want (their extrapolated volition) to see. (The explicit blacklists exist only to override the recommendations of the algorithm. By default, an unknown and unconnected person is invisible, except for their comments that my friends have upvoted.)
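One simple way to turn that input into a visibility decision, sketched with invented data shapes: estimate opinion similarity from overlapping votes, and let the explicit lists override the estimate, with unknown users hidden by default unless the score vouches for them. The similarity measure and the cutoff value here are assumptions, not a worked-out design.

```javascript
// Sketch of the proposed algorithm; data shapes and the cutoff are invented.
// myVotes and theirVotes map commentId -> +1 or -1.
function voteSimilarity(myVotes, theirVotes) {
  let shared = 0, agree = 0;
  for (const [id, v] of Object.entries(myVotes)) {
    if (id in theirVotes) {
      shared++;
      if (theirVotes[id] === v) agree++;
    }
  }
  // Range [-1, 1]; 0 when there is no overlap to judge from.
  return shared === 0 ? 0 : (2 * agree - shared) / shared;
}

// Explicit lists override the similarity score; unknown users with a low
// score stay invisible by default.
function isVisible(user, whitelist, blacklist, similarity, cutoff = 0.3) {
  if (blacklist.has(user)) return false;
  if (whitelist.has(user)) return true;
  return similarity >= cutoff;
}
```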
If you implemented this, I’d even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side.
If the site is visible to anonymous readers, a global karma is necessary. Though it could somehow be calculated from the customized karmas.
Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are—which is what nearly always happens when people decide to shut out dissenting voices.
I also wouldn’t like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect in filtering out the noise and not filtering out the disagreement.
I think a reasonable approach is to calculate the probability of “reasonable disagreement” based on previous comments. This is approximately what we do in real life: based on our previous experience, we take some people’s opinions more seriously, so when someone disagrees with us, we react differently depending on who it is. If I agree with someone about many things, I will consider their opinion more seriously when we disagree. However, if someone disagrees about almost everything, I simply consider them crazy.
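A crude estimator for that probability, under an assumption I am adding here: treat a person's historical agreement rate as the probability that their next disagreement is "reasonable" rather than noise, with Laplace smoothing so that strangers start at 0.5 instead of a hard 0 or 1.

```javascript
// Rough sketch with a made-up prior: the historical agreement rate,
// Laplace-smoothed, stands in for the probability that a new
// disagreement from this person is worth taking seriously.
function reasonableDisagreementProb(agreements, disagreements) {
  return (agreements + 1) / (agreements + disagreements + 2);
}
```

Someone who agreed with me 8 times and disagreed twice gets 0.75; a total stranger gets 0.5; someone who disagrees about everything drifts toward 0, matching the "I simply consider them crazy" reaction.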
I think that most people use default settings, so this system would work best on a site where this would be the default setting.
I think this is a minor convenience at best; when you choose to form your darknet, you could simply inform the other candidates of your plan: via email, PM, or some other out-of-band channel.
It would be interesting to find a good algorithm that takes the following data as input...
This sounds pretty similar to Google’s PageRank, only for comments instead of pages. Should be doable.
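To make the comparison concrete, here is a minimal PageRank-style power iteration over an endorsement graph, where an upvote or whitelist entry plays the role of a hyperlink. The graph representation and parameters are illustrative; this is the textbook algorithm, not a worked-out design for the site.

```javascript
// Minimal PageRank-style power iteration. links[i] lists the users
// that user i endorses (upvotes / whitelists); the returned array
// holds each user's rank. Graph shape and parameters are illustrative.
function pageRank(links, damping = 0.85, iterations = 50) {
  const n = links.length;
  let rank = new Array(n).fill(1 / n);
  for (let it = 0; it < iterations; it++) {
    // Every node starts each round with the "random jump" share.
    const next = new Array(n).fill((1 - damping) / n);
    for (let i = 0; i < n; i++) {
      if (links[i].length === 0) continue; // dangling node; fine for a sketch
      const share = (damping * rank[i]) / links[i].length;
      for (const j of links[i]) next[j] += share;
    }
    rank = next;
  }
  return rank;
}
```

Seeding the jump distribution with one reader's whitelist instead of the uniform vector would turn this into the personalized variant discussed above.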
If the site is visible to anonymous readers, a global karma is necessary.
Yes, of course. The goal is not to turn the entire site exclusively into your darknet, but to allow you to run your darknet in parallel with the normal site as seen by everyone else.
I also wouldn’t like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect in filtering out the noise and not filtering out the disagreement.
Agreed; if you could figure out a perfect filtering algorithm, you would end up implementing an Oracle-grade AI, and then we’d have a whole lot of other problems to worry about :-)
That said, I personally tend to distrust my emotions. I’d rather take an emotional hit than risk missing some important point just because it makes me feel bad; thus, I wouldn’t want to join a darknet such as yours. That’s just me, though; your experience is probably different.