Well, one has to be ultra careful to keep the number of contrary ideas very low within a post
Yes, this seems likely.
One has to already have a giant body of posts aligning with the prevailing opinions (and it is boring to just generate texts that are in agreement).
I also find it boring to generate texts that are in agreement, and hence rarely do so. I don’t think that’s the main issue.
edit: Also you may have way more skill at converting people to contrary ideas than I do. I lose patience.
I don’t think “skill at converting people” and “patience” are the right way to think about it either. I think what helps are:
Establish a track record of being a careful thinker, who usually spends a lot of time looking for holes in their own ideas and arguments before posting them. And not in a cursory way or out of a sense of obligation, but because you know deep down that most new ideas, including your own, and even new arguments pointing out that other ideas are wrong, are wrong. Look for steps in your argument that are weak; intuitions that other people may not share; equally plausible arguments with contradictory conclusions; analogous arguments that lead to obviously wrong conclusions; alternative hypotheses that can explain your observations.
Write clearly. Isolate one particular idea or line of argument at a time and try to explain it as clearly as possible before introducing another one.
Know the existing work on your subject and explain how it relates (or why it isn’t relevant) to your ideas, or why it is wrong or incomplete. Most people, when they’re handed a problem that has stumped others for years, or is the subject of some long-running debate, still seem to assume that they can solve it with a few days of thought, without researching the existing ideas and arguments, and quickly convince everyone else of their correctness. If you’re not such a person, then signal it credibly!
Forget about fairness (in case you’re wondering why Eliezer and his supporters get held to a different standard). Without Eliezer there would be no LessWrong, and the next best discussion forum for these topics would probably be significantly worse. So be happy with what we’ve got and maybe work to improve it on the margins. There’s no point in thinking “my posts ought to receive the same treatment as those of FAI boosters, therefore I refuse to do more”.
Establish a track record of being a careful thinker, who usually spends a lot of time looking for holes in their own ideas and arguments before posting them. And not in a cursory way or out of a sense of obligation, but because you know deep down that most new ideas, including your own, and even new arguments pointing out that other ideas are wrong, are wrong. Look for steps in your argument that are weak; intuitions that other people may not share; equally plausible arguments with contradictory conclusions; analogous arguments that lead to obviously wrong conclusions; alternative hypotheses that can explain your observations.
TBH, with this community I feel I’m dealing with some people who have, in general, a very deeply flawed approach to thought, one which in a subtle way breaks problem solving, and especially cooperative problem solving.
The topic here is fuzzy, and I do say that it is rather unfinished; that implies I think it may not be true, doesn’t it? It is also a discussion post. At the same time, what I do not say is ‘let’s go ahead and implement AI based on this’, or anything similar. Yet it is immediately presumed that I posted this with utter and complete certainty, even though that cannot be inferred from anything. The disagreement I get is also one of utter, crackpot-grade certainty that there is no way it is in any way related to human moral decision-making. Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true, and I do not think I have one. At the same time, the point is to look and see how it may enhance the understanding.
For example, it is plausible that we humans use the size of our internal representation of a concept as a proxy for something, because a larger representation generally goes with, e.g., closer people. Assuming any kind of compression, the size of an internal representation is a form of complexity measure.
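To make that concrete, here is a minimal sketch, not anything authoritative: treat the compressed size of a description as a crude proxy for the complexity of a representation. The `complexity` helper, the use of zlib, and the example strings are all made up for illustration; zlib is only a stand-in for whatever compression the brain might actually do.

```python
import zlib

def complexity(s: str) -> int:
    """Crude complexity proxy: the length, in bytes, of the compressed encoding."""
    return len(zlib.compress(s.encode("utf-8")))

# A concept with a rich, detailed internal representation (a close friend)
# versus one represented by a thin, repetitive stereotype (a stranger).
friend = ("Alice: laughs at her own puns, allergic to cats, keeps quitting and "
          "restarting the violin, calls her mother every Sunday, hates cilantro")
stranger = "some person some person some person some person some person"

print(complexity(friend))    # more bytes: lots of irreducible detail
print(complexity(stranger))  # fewer bytes: highly redundant, compresses well
```

The point is only that, given any kind of compression, the size of an internal representation behaves like a (very rough) complexity measure: richer, less redundant representations take more bits.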
Forget about fairness
I’ll just go to a less pathological place. The issue is not fairness; here it is not enough (nor needed) to have any domain-specific knowledge (such as, e.g., knowing that the size of a compressed representation is a form of complexity). What is necessary is very extensive knowledge of a large body of half-baked (or entirely un-baked, while verbose), vague stuff, lest you contradict any of it while trying to do any form of search for any kind of solution. What you’re doing here is pathologically counterproductive to any form of problem solving that involves several individuals (and likely counterproductive to problem solving by individuals as well). You (LessWrong) are still apes with pretensions, and your ‘you have not proved it’ still leaks into ‘your belief is wrong’ just as much as for anyone else, because that’s how brains work: nearby concepts collapse, and just because you know they do doesn’t magically make it not so. The purpose of knowing the fallibility of the human brain is not to support the (frankly, very naive) assumption that now that you know, you are magically no longer fallible. This is like those toy decision agents that second-guess themselves into a faulty answer.
Yes, I do not have a proof, or a particularly convincing argument, that it is related; that is absolutely true, and I do not think I have one. At the same time, the point is to look and see how it may enhance the understanding.
The thing is, the idea that our values may have something to do with complexity isn’t a new one. See this thread for example. It’s the kind of idea that occurs to a lot of smart people, but doesn’t seem to lead anywhere interesting (e.g., some formal definition of complexity that actually explains our apparent values, or good arguments for why such a definition must exist). What you see as unreasonable certainty may just reflect the fact that you’re not offering anything new (or if you are, it’s not clearly expressed) and others have already thought it over and decided that “complexity based moral values” is a dead end. If you don’t want to take their word for it and find their explanations unsatisfactory, you’ll just have to push ahead yourself and come back when you have stronger and/or clearer arguments (or decide that they’re right after all).
Where?
TBH, with this community I feel I’m dealing with some people who have, in general, a very deeply flawed approach to thought, one which in a subtle way breaks problem solving, and especially cooperative problem solving.
And this community gets the impression that it is dealing with what amounts to a straw-man generator. Let’s agree to disagree.
I’ll just go to a less pathological place.
Please do. As you have said, you can expect to achieve more social success with your preferred behaviors if you execute them in different social hierarchies. And success here would require drastically changing how you behave in response to social incentives and local standards, something that you are not willing to do. So if you go elsewhere, everybody wins. You can continue to believe you are superior to us and that all disagreement with you is the result of us being brainwashed or inferior or whatever, and we can go about having more enjoyable conversations.
Really, you don’t need to write a whole series of comments to ‘break up with us’. You can just click the logout button and type a new address into the address bar. Parting declarations of superiority don’t really achieve much.
I thought Dmytry sometimes has interesting ideas, and it’d be worth trying to convince him to stick around but be more careful and less adversarial. As orthonormal said, LW needs better contrarians, and Dmytry seems like one of the more promising candidates. Why tell him to go away? Do you think my effort was doomed or counterproductive?
I thought Dmytry sometimes has interesting ideas, and it’d be worth trying to convince him to stick around but be more careful and less adversarial. As orthonormal said, LW needs better contrarians, and Dmytry seems like one of the more promising candidates.
There is some potential there: Dmytry has what seems to be a decent IQ and some technical knowledge in there somewhere. But the indications suggest that he has more potential to be destructive than useful. I would expect him to end up as a XiXiDu, only far more powerful (more intelligent and rhetorically proficient) and far more hostile (XiXiDu’s attitude hovers just on the border; Dmytry, given time, would be more consistently hostile).
Why tell him to go away?
His idea; I merely agree that it would benefit him and us. For what it is worth, I don’t think my agreement is likely to encourage him to leave. If anything he would be inclined to do the opposite of what my preference is.
In terms of my own personal interests: I incur a cost when there are people like Dmytry around. My nature (and a considered, self-endorsed nature at that) is such that when I see people try to intellectually bully others with disingenuous non sequiturs and straw men, I am naturally inclined to interfere. Dmytry is far from the worst I’ve seen in this regard, but he’s not too far down the list.
If the guy wants to leave and has concluded we are too toxic for him, then I’m not going to argue with that. It seems better for everyone. Arrogant nerds are a dime a dozen; we have plenty around here, so we don’t need another. And communities where one can show off technical competence and rhetorical flair are a dime a dozen too, so Dmytry doesn’t need us. I’d recommend he try MENSA. He would fit in well (based on what I recall of my time there and what I have seen of Dmytry).
Do you think my effort was doomed or counterproductive?
Doomed.
If anything he would be inclined to do the opposite of what my preference is.
Why did you do it then?
Sigh… I should probably just let it go, given that it was a long shot anyway, but it’s kind of frustrating to have put in the effort, and not even get a clean negative result back as evidence.
Perhaps you could let this one go but tell us how to catch the next one?
I can’t say I noticed anything worthwhile. What has Dmytry said that you regard as promising?
Well, he has written 9 discussion posts with >10 karma in the last 4 months or so. Do you not like any of them? Or think of it this way: if he is the kind of person we want to drive away instead of helping to fit better into our community, then where are we going to find those “better contrarians”?
Looking through his posts, most are downvoted, and the bulk of his karma seems to be coming from a conjunction fallacy post which says nothing new that wasn’t covered in previous posts by, say, Eliezer (or myself, in prediction-related posts), and another content-less post composed of pretty much just discussion (of a very low level). Brain shrinkage was a good topic, but unlike my essay on similar topics (covering brain shrinkage as a special case), Dmytry completely fails to bring the references. And so on.
So again, what do you regard as promising?
I don’t want to mention specific posts, since that would probably get me involved in a debate over the exact merits of those posts, but it seems like you missed the two posts with the highest upvotes. And yes, most of his posts are downvoted, but my guess is that it’s easier to teach someone to avoid posting bad ideas than to come up with even semi-good ones.
Anyway, I don’t want to argue too much over this. If, all things considered, you (or wedrifid) don’t think there’s much chance that Dmytry could become someone that would make LW better instead of worse, that’s fine with me. I just wanted to make sure it was a considered decision on wedrifid’s part to push Dmytry to leave, and not just an emotional reaction.
I just wanted to make sure it was a considered decision on wedrifid’s part to push Dmytry to leave, and not just an emotional reaction.
Considered and strategic, but not committed to, and considered without awareness of your degree of personal interest. In such a circumstance, if I knew there was someone with a particular interest in working with a (shall we call them a ‘candidate’?), I would stand back and refrain from replying to or interacting with the candidate, except in those circumstances where they are directly hampering the contributions of others.
When it comes to handling such situations better in the future, it occurs to me that the material you have already written in your various comments here would make a decent post (“How to be a productive contrarian?”). If that were available as a post, then when the next guy came along and started saying “You guys disagree with me, therefore you are all a bunch of brainwashed, groupthinking fools”, we could fog and say “It’s true, there is plenty of groupthink on LessWrong. Wei_Dai wrote this post on how he manages it.” That would be just as true as the response “You’re actually getting downvoted because you’re wrong and acting like a dick. STFU.”, but far more useful!
In fact, your advice (including what to do instead of worrying about ‘fairness’) generalizes well to dealing with new challenging social situations of all kinds.
Or think of it this way: if he is the kind of person we want to drive away instead of helping to fit better into our community, then where are we going to find those “better contrarians”?
In my experience you don’t find ‘better contrarians’ among people who are naturally contrary and have a chip on their shoulder. A good contrarian mostly agrees with stuff (unless the community they are in really is defective), but thinks things through and then carefully presents their contrary positions as though they are making a natural contribution.
Don’t seek the contrariness. Seek good thinking and willingness to contribute. You get the contrarian positions for free when the generally good thinking gets results. For example, you get lukeprog.
But everyone else is actually stupid.