I don’t see why having the debate at a higher level of knowledge would be a bad thing.
Firstly, a large proportion of the Sequences do not constitute “knowledge”, but opinion. It’s well-reasoned, well-presented opinion, but opinion nonetheless—which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren’t in the Sequences; that’s fun too. Secondly:
Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting.
No, it’s not very interesting to you and me, but to the “uneducated folks” whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it’s easily correctable. However, kicking people out for being ignorant does not facilitate such correction.
The point of my post was that that is not an acceptable solution.
What’s your solution, then? You say,
I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don’t have to agree with me, but I’d just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.
To me, “more exclusive LW” sounds exactly like the kind of solution that doesn’t work, especially coupled with “enforcing a little more strongly that people read the sequences” (in some unspecified yet vaguely menacing way).
Firstly, a large proportion of the Sequences do not constitute “knowledge”, but opinion. It’s well-reasoned, well-presented opinion, but opinion nonetheless—which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren’t in the Sequences; that’s fun too. Secondly:
Whether the sequences constitute knowledge is beside the point—they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before they try to debate a topic, especially when we have people going to the trouble of maintaining a wiki that catalogs relevant ideas and opinions that have already been expressed here. If people aren’t willing or able to pick up the basic opinions already out there, they will almost never be able to bring anything of value to the conversation, especially on topics discussed here that lack sufficient public exposure to ensure that at least the worst ideas have been weeded out of the minds of most reasonably intelligent people.
I’ve participated in a lot of forums (mostly freethought/rationality forums), and by far the most common cause of poor discussion quality in all of them was a lack of basic familiarity with the topic and the rehashing of tired, old, wrong arguments that pop into nearly everyone’s head (at least for a moment) upon considering a topic for the first time. This community is much better in this respect than any other I’ve been a part of, but I have noticed a slow decline in this department.
All of that said, I’m not sure if LW is really the place for heavily moderated, high-level technical discussions. It isn’t sl4; outreach and community building really outweigh the more technical topics, and (at least as long as I’ve been here) this has steadily become more and more the case. However, I would really like to see the sort of site the OP describes (something more like sl4) as a sister site (or, if one already exists, I’d like a link). The more technical discussions and posts, when they are done well, are by far what I like most about LW.
I agree with pretty much everything you said (except for the sl4 stuff, because I haven’t been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussion—outreach as well as technical. I’m not proposing that we set the barrier to entry at zero; I merely think that the guideline “you must have read and understood all of the Sequences before posting anything” sets the barrier too high.
I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it’s possible that I’m biased in this regard, since I myself do not agree with everything Eliezer says in those posts.
Disagreement is perfectly fine by me. I don’t agree with the entirety of the sequences either. It’s disagreement without looking at the arguments first that bothers me.
Sequences do not constitute “knowledge”, but opinion.
What is the difference between knowledge and opinion? Are the points in the sequences true or not?
Read map and territory, and understand the way of Bayes.
No, it’s not very interesting to you and me, but to the “uneducated folks” whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it’s easily correctable. However, kicking people out for being ignorant does not facilitate such correction.
The thing is, there are other places on the internet where you can talk to people who have not read the sequences. I want somewhere where I can talk to people who have read the LW material, so that I can have a worthwhile discussion without getting bogged down by having to explain that there’s no qualitative difference between opinion and fact.
To me, “more exclusive LW” sounds exactly like the kind of solution that doesn’t work, especially coupled with “enforcing a little more strongly that people read the sequences” (in some unspecified yet vaguely menacing way).
I don’t have any really good ideas about how we might be able to have an enlightened discussion and still be friendly to newcomers. Identifying a problem and identifying myself among people who don’t want a particular type of solution (relaxing LW’s phygish standards) doesn’t mean I support any particular straw-solution.
What is the difference between knowledge and opinion? Are the points in the sequences true or not?
Some proportion of them (between 0 and 100%) are true; others are false or neither. Not being omniscient, I can’t tell you which ones are which; I can only tell you which ones I believe are likely to be true with some probability. The proportion of those is far smaller than 100%, IMO.
Read map and territory, and understand the way of Bayes.
See, it’s exactly this kind of ponderous verbiage that leads to the necessity for rot13-ing certain words.
...without getting bogged down by having to explain that there’s no qualitative difference between opinion and fact.
I believe that there is a significant difference between opinion and fact, though arguably not a qualitative one. For example, “rocks tend to fall down” is a fact, but “the Singularity is imminent” is an opinion—in my opinion—and so is “we should kick out anyone who hasn’t read the entirety of the Sequences”.
Identifying a problem and identifying myself among people who don’t want a particular type of solution (relaxing LW’s phygish standards) doesn’t mean I support any particular straw-solution.
When you said “we should make LW more exclusive”, what did you mean, then?
In any case, I do have a solution for you: why don’t you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma? This way you can browse the site in peace, without getting distracted by our pedestrian mutterings. Better yet, you could have your scriptlet simply blacklist everyone by default, except for certain specific usernames whom you personally approve of. Then you can create your own “phyg” and make it as exclusive as you want.
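For what it’s worth, here is a rough sketch of what such a userscript might look like, written in TypeScript (it would be compiled to plain JavaScript before being loaded). The CSS selectors, the karma threshold, and the whitelisted usernames are all made-up placeholders; a real script would have to be adapted to whatever markup the site actually uses.

```typescript
// Hypothetical comment filter: hide any comment whose author is not on a
// personal whitelist and whose karma falls below a threshold.
// ".comment", ".author" and ".user-karma" are placeholder selectors; the real
// markup would need to be inspected.

const KARMA_THRESHOLD = 5000;
const WHITELIST = new Set<string>(["SomeUserITrust", "AnotherUser"]); // hypothetical names

function hideLowKarmaComments(): void {
  document.querySelectorAll<HTMLElement>(".comment").forEach((comment) => {
    const author = comment.querySelector(".author")?.textContent?.trim() ?? "";
    const karmaText = comment.querySelector(".user-karma")?.textContent ?? "0";
    const karma = parseInt(karmaText.replace(/[^\d-]/g, ""), 10) || 0;

    // Keep the comment only if the author is whitelisted or clears the bar.
    // For the "blacklist everyone by default" variant, drop the karma check
    // and keep only the whitelist test.
    if (!WHITELIST.has(author) && karma < KARMA_THRESHOLD) {
      comment.style.display = "none";
    }
  });
}

hideLowKarmaComments();
```

Loaded through a userscript manager, something like this would run on every page load; re-running it after dynamically loaded comments appear is left as an exercise.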
In any case, I do have a solution for you: why don’t you just code up a Greasemonkey scriptlet (or something similar) to hide the comments of anyone with less than, say, 5000 karma?
This would disrupt the flow of discussion.
I tried this on one site. The script did hide the offending comments from my eyes, but other people still saw those comments and responded to them. So I did not have to read bad comments, but I had to read the reactions to them. I could have improved my script to filter out those reactions too, but...
Humans react to their environment. We cannot consciously decide to filter something out and refuse to be influenced by it. If I come to a discussion with 9 stupid comments and 1 smart comment, my reaction will be different than if there were only the 1 smart comment. I can’t filter those 9 comments out. Reading them wastes my time and changes my emotions. So even if you filter those 9 comments out with software but I don’t, the discussion between the two of us will still be indirectly influenced by those comments. Most probably, if I see 9 stupid comments, I will stop reading the article, so I will skip the 1 smart one too.
People have evolved communication strategies that don’t work on the internet, because the necessary infrastructure is missing. If the two of us were speaking in the real world and a third person I considered rather stupid tried to join our discussion, you would see it in my body language even if I never openly told them to buzz off. But when we speak online and I ignore someone’s comments, you don’t see it; that communication channel is missing. Karma does something like this, except that it represents the collective emotion instead of an individual one. (Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.)
Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
So I did not have to read bad comments, but I had to read the reactions to them.
I see, so you felt that the comments of “smart” (as per your filtering criteria) people were still irrevocably tainted by the fact that they were replying to “stupid” (as per your filtering criteria) people. In this case, I think you could build upon my other solution. You could blacklist everyone by default, then personally contact individual “smart” people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again.
Perhaps a better approximation would be if the software allowed you to select people you consider smart, and then you would see karma based only on their clicks.
Slashdot has something like this (though not exactly). I think it’s a neat idea. If you implemented this, I’d even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side.
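For concreteness, a minimal sketch of how the two scores could be computed side by side, assuming the site exposed a flat list of votes; the Vote type and the hand-picked “smart voter” set are purely illustrative, not anything the site actually provides.

```typescript
// Sketch: for each comment, compute an all-inclusive score and a "smart-only"
// score that counts only votes cast by users I have personally selected.

interface Vote {
  commentId: string;
  voter: string;
  value: 1 | -1; // upvote or downvote
}

function scoreComments(
  votes: Vote[],
  smartVoters: Set<string>
): Map<string, { all: number; smartOnly: number }> {
  const scores = new Map<string, { all: number; smartOnly: number }>();
  for (const vote of votes) {
    const entry = scores.get(vote.commentId) ?? { all: 0, smartOnly: 0 };
    entry.all += vote.value;
    if (smartVoters.has(vote.voter)) {
      entry.smartOnly += vote.value;
    }
    scores.set(vote.commentId, entry);
  }
  return scores;
}
```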
Creating a good virtual discussion is difficult, because our instincts are based on different assumptions.
And everyone’s assumptions are different, which is why I’m very much against global solutions such as “ban everyone who hasn’t read the Sequences”, or something to that effect.
Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are—which is what nearly always happens when people decide to shut out dissenting voices. That’s just my personal choice, though; anyone else should be able to form whichever cabal they desire, based on their own preferences.
You could blacklist everyone by default, then personally contact individual “smart” people and invite them to your darknet. The price of admission is to blacklist everyone but yourself and the people you personally approve of. When someone breaks this policy, you could just blacklist them again.
The first step (blacklisting everyone except me and the people I approve of) is easy. Expanding the network depends on other people joining the same system, or at least being willing to send me a list of people they approve of. I think that most people use default settings, so this system would work best on a site where this would be the default setting.
It would be interesting to find a good algorithm, which would take the following data as input: each user can put other users on their whitelist or blacklist, and can upvote or downvote comments by other users. It could somehow calculate the similarity of opinions and then show everyone the content they want to see (extrapolated volition). (The explicit blacklists exist only to override the recommendations of the algorithm. By default, an unknown and unconnected person is invisible, except for their comments upvoted by my friends.)
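One possible shape for such an algorithm, sketched under the assumption that each user’s full vote history is available: weight other people’s votes on a comment by how often they have agreed with me in the past, and let my explicit whitelist and blacklist override the result. Everything here (the types, the agreement measure) is illustrative rather than a worked-out design.

```typescript
// Personalized visibility score: votes from users who tend to agree with me
// count for more; my explicit blacklist and whitelist override everything.

type VoteHistory = Record<string, 1 | -1>; // commentId -> vote

// Fraction of commonly-voted comments on which two users agreed (0.5 = no signal).
function agreement(a: VoteHistory, b: VoteHistory): number {
  let shared = 0;
  let agreed = 0;
  for (const id of Object.keys(a)) {
    if (id in b) {
      shared += 1;
      if (a[id] === b[id]) agreed += 1;
    }
  }
  return shared === 0 ? 0.5 : agreed / shared;
}

function personalizedScore(
  myVotes: VoteHistory,
  commentAuthor: string,
  votesOnComment: Map<string, 1 | -1>,    // voter -> vote on this comment
  allHistories: Map<string, VoteHistory>, // voter -> full vote history
  whitelist: Set<string>,
  blacklist: Set<string>
): number {
  if (blacklist.has(commentAuthor)) return -Infinity; // never show
  if (whitelist.has(commentAuthor)) return Infinity;  // always show

  let score = 0;
  for (const [voter, vote] of votesOnComment) {
    const weight = agreement(myVotes, allHistories.get(voter) ?? {});
    score += vote * weight; // like-minded voters carry more weight
  }
  return score;
}
```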
If you implemented this, I’d even be interested in trying it out, provided that I could see the two scores (smart-only as well as all-inclusive) side-by-side.
If the site is visible to anonymous readers, a global karma is necessary. It could, however, be calculated somehow from the customized karmas.
Personally, though, I would prefer to err on the side of experiencing negative emotions now and then. I do not want to fall into a death spiral that leads me to forming a cabal of people where everyone agrees with each other, and we spend all day talking about how awesome we are—which is what nearly always happens when people decide to shut out dissenting voices.
I also wouldn’t like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect at filtering out the noise without also filtering out the disagreement.
I think a reasonable approach is to calculate the probability of “reasonable disagreement” based on previous comments. This is something that we approximately do in real life—based on our previous experience we take some people’s opinions more seriously, so when someone disagrees with us, we react differently based on who it is. If I agree with someone about many things, then I will take their opinion more seriously when we disagree. However, if someone disagrees about almost everything, I simply consider them crazy.
I think that most people use default settings, so this system would work best on a site where this would be the default setting.
I think this is a minor convenience at best; when you choose to form your darknet, you could simply inform the other candidates of your plan via email, PM, or some other out-of-band channel.
It would be interesting to find a good algorithm, which would take the following data as input...
This sounds pretty similar to Google’s PageRank, only for comments instead of pages. Should be doable.
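A rough sketch of that analogy, assuming the input is a “who whitelists whom” graph rather than a link graph; the damping factor and iteration count are the usual textbook defaults, and everything else is illustrative.

```typescript
// PageRank-style trust scores over an endorsement graph: an edge u -> v means
// "u whitelists v", and trust flows along endorsements via power iteration.

function trustRank(
  endorsements: Map<string, string[]>, // user -> users they endorse
  damping = 0.85,
  iterations = 50
): Map<string, number> {
  const users = Array.from(endorsements.keys());
  const n = users.length;

  let rank = new Map<string, number>();
  for (const u of users) rank.set(u, 1 / n);

  for (let i = 0; i < iterations; i++) {
    const next = new Map<string, number>();
    for (const u of users) next.set(u, (1 - damping) / n);

    for (const u of users) {
      const endorsed = endorsements.get(u) ?? [];
      if (endorsed.length === 0) continue; // simplification: dangling users pass nothing on
      const share = (rank.get(u) ?? 0) / endorsed.length;
      for (const v of endorsed) {
        next.set(v, (next.get(v) ?? (1 - damping) / n) + damping * share);
      }
    }
    rank = next;
  }
  return rank;
}
```

Seeding the initial rank from a few hand-picked users, instead of the uniform distribution, would make it behave more like a personal trust metric than a global one.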
If the site is visible to anonymous readers, a global karma is necessary.
Yes, of course. The goal is not to turn the entire site exclusively into your darknet, but to allow you to run your darknet in parallel with the normal site as seen by everyone else.
I also wouldn’t like to be shielded from disagreeing opinions. I want to be shielded from stupidity and offensiveness, to protect my emotions. Also, because my time is limited, I want to be shielded from noise. No algorithm will be perfect at filtering out the noise without also filtering out the disagreement.
Agreed; if you could figure out a perfect filtering algorithm, you would end up implementing an Oracle-grade AI, and then we’d have a whole lot of other problems to worry about :-)
That said, I personally tend to distrust my emotions. I’d rather take an emotional hit than risk missing some important point just because it makes me feel bad; thus, I wouldn’t want to join a darknet such as yours. That’s just me, though; your experience is probably different.
I mean that I’d like to be able to participate in discussion with better (possibly phygish) standards. LessWrong has a lot of potential and I don’t think we are doing as well as we could on the quality of discussion front. And I think making LessWrong purely more open and welcoming, without doing something to keep a high level of quality somewhere, is a bad idea.
And I’m not afraid of being a phyg.
That’s all, nothing revolutionary.
I mean that I’d like to be able to participate in discussion with better (possibly phygish) standards.
It seems like my proposed solution would work for you, then. With it, you can ignore anyone who isn’t enlightened enough, while keeping the site itself as welcoming and newbie-friendly as it currently is.
And I’m not afraid of being a phyg.
I’m not afraid of it either, I just don’t think that power-sliding down a death spiral is a good idea. I don’t need people to tell me how awesome I am, I want them to show me how wrong I am so that I can update my beliefs.
Specifically ‘the way of’. Would you have the same objection with ‘and understand how Bayesian updating works’? (Objection to presumptuousness aside.)
Probably. The same sentiment could be expressed as something like this:
The map is not the territory; if you understood how Bayesian updating works, you would know that facts and opinions are qualitatively the same.
This phrasing is still a bit condescending, but a) it gives an actual link for me to read and educate my ignorant self, and b) it makes the speaker sound merely like a stuck-up long-timer, instead of a creepy phyg-ist.
Educating people is like that!
What I would have said about the phrasing is that it is wrong.
Merely telling people that they aren’t worthy is not very educational; it’s much better to tell them why you think they aren’t worthy, which is where the links come in.
Sure, but I have no problem with people being wrong, that’s what updating is for :-)
Merely telling people that they aren’t worthy is not very educational; it’s much better to tell them why you think they aren’t worthy, which is where the links come in.
Huh? This was your example, one you advocated and one that includes a link. I essentially agreed with one of your points—your retort seems odd.
Sure, but I have no problem with people being wrong, that’s what updating is for :-)
Huh again? You seem to have missed a level of abstraction.
I want to upvote you for being right about the phygvfu language and where to put the filter.
I want to downvote you for just about everything else. You are discourteous, butthurt, and there’s more than a bit of Dunning-Kruger stuck in your teeth.
Breathe, son. He don’t wanna kick you out, he just wishes it didn’t seem like such a damn good idea.
Well, that certainly wasn’t my intention. Please go ahead and downvote me if you feel that way.
He don’t wanna kick you out...
I wasn’t taking anything he said personally; as far as I can tell, there’s nothing he can do to actually kick me out, and I don’t think he even wants to do that in the first place. I do believe he’s arguing in good faith.
That said, I do believe strongly that, in order for communities to grow and continue being useful and productive, they need to welcome new members now and then; and I think that nyan_sandwich’s original solution sets the barrier to entry way too high.
Please go ahead and downvote me if you feel that way.
You’re too kind. Of course I already did. I just wish you’d somehow split up the things I wanted to respond to.
As an aside, I didn’t downvote the post I quoted, and I don’t know why someone would. Maybe because we’re speaking pointlessly? Maybe because they thought I was trolling and you were feeding me?