If you are right and I am incorrect about the merits of such comments, then I would consider myself so fundamentally confused when reasoning about the quality of comments like those that anything I have to say about that topic really is almost worthless.
You may well be right about the merits of comments like that, but wrong about the situation being very political. Maybe people are refraining from voting comments like it down because they do not recognize their low merit, rather than because of political affiliations. On the other hand, if you are wrong about the quality of those comments, saying what you have to say is still not worthless because by doing so you may be convinced that you are wrong (e.g., if you explained your reasons fully then someone could perhaps point out a flaw in them that you missed before), which would be a benefit to yourself as well as to the LW community.
So I don’t think this is a good reason for stopping. What would be a good reason is if there’s a good chance you’ll actually collect and organize what you have to say into a post, in which case I’ll be patient and look forward to it.
Maybe people are refraining from voting comments like it down because they do not recognize their low merit,
Yes, I believe that they don’t recognize the low merit.
On the other hand, if you are wrong about the quality of those comments, saying what you have to say is still not worthless because by doing so you may be convinced that you are wrong (e.g., if you explained your reasons fully then someone could perhaps point out a flaw in them that you missed before), which would be a benefit to yourself as well as to the LW community.
An expected utility calculation applies, and my estimation is that I have erred on the side of too much explaining, not too little.
What would be a good reason is if there’s a good chance you’ll actually collect and organize what you have to say into a post, in which case I’ll be patient and look forward to it.
Another good reason would be that I find arguing with you about what posts should be made to be both fruitless and unpleasant. I find that the difference in preferences, assumptions and beliefs constitutes an inferential distance that does not seem to be successfully crossed—I don’t find I learn anything from your exhortations and don’t expect to convince you of anything either. Note that I applied rudimentary tact and mentioned only the contextual reason because no matter how many caveats I include it is always going to come across as more personal and rude than I intend to be (where that intent would be the minimum possible given significant disagreement).
Since this is something of a pattern, you should note that a tendency to make it difficult to end conversations with you gracefully makes it less practical to engage in such conversations in the first place. Let’s assume that you are right and the reason expressed for withdrawing was a bad one—for emphasis, let’s even assume that for some reason me ending a particular conversation is both epistemically and instrumentally irrational as well as immoral. Even in such a case, you choosing to push a frame where I should continue a conversation or should explain myself to you or others would still give me an incentive to avoid the conversation if my foresight allows, to avoid the awkwardness and anticipated social cost.
What I am saying is that there is a tradeoff to making comments like the parent. It may achieve some goals that you could have (persuasion of someone regarding the wrongness of ending a particular conversation, perhaps) but comes with the cost of reducing the likelihood of future engagement. Whether that tradeoff is worth it depends on your preferences and what you are trying to achieve.
Ok, I think I figured it out. It seems rather obvious in retrospect and I’m not sure what took me so long.
You have a very different view of the current state of LW than I do. Whereas I see mostly reasonable efforts at truth seeking with only occasional forays into politics, you see a lot more social aggression and political fights. Whereas I think komponisto’s comment was at worst making an honestly mistaken point or asking a badly phrased question, you interpret it as dark arts and/or social aggression, and think that the appropriate response is a counterattack/punishment, which is good for LW because it would deter such aggression/dark arts from him and others in the future. I guess that from your perspective, “fictional” serves as such a counterattack/punishment, whereas my suggested answer would only blunt his attack but not deliver a counter-punch.
If my guess is correct, I’m quite alarmed. Your view of LW has the potential to become a self-fulfilling prophecy, because if you are wrong about the current state of LW, by treating others as enemies when they are just honestly mistaken or phrasing things badly, you’re making them into enemies and politicizing discussions that weren’t political to begin with. Furthermore you’re a very prolific commenter and viewed as a role model by a significant number of other LWers who may adopt your assessment and imitate your behavior, thereby creating a downward spiral of LW culture.
I would urge you to reconsider, but since you don’t like my exhortations, I feel like I should at least indicate to others that there is significant disagreement about whether your assessment and behavior are normative.
Whereas I see mostly reasonable efforts at truth seeking with only occasional forays into politics, you see a lot more social aggression and political fights
Did the fictional Joker matter have something to do with politics? Am I missing something? Or do you mean politics in the sense of “Activities concerned with the acquisition or exercise of authority or status”?
Question: is it your sense that wedrifid views LessWrong as unusually ridden with social aggression, or views komponisto’s comment as demonstrating exceptional social aggression? Or merely that he views these things as containing social aggression, like most forums and exchanges?
As an answer to the slightly different question of what Wedrifid sees himself seeing, it would probably be less than most forums and in general typical of human interactions. In fact, a human community without any social aggression would just be creepy and probably poorly functioning, unless the humans were changed in all sorts of ways to compensate.
(nods) FWIW, I’m entirely unsurprised by this. What I’m not quite sure of is whether Wei Dai shares our view of what you believe in this space. I’m left with a niggling suspicion that you and he are not using certain key terms equivalently.
This is almost certainly the case, and one of the things that made conversation difficult.
I disagree with Wei Dai on all points in the parent and find his misrepresentation of me abhorrent (even though he is quite likely to be sincere). I hope that Wei Dai’s ability to persuade others of his particular mind-reading conclusion is limited. My most practical course of action—and the one I will choose to take—seems to be that of harm minimisation. I will not engage with—or, in particular, defend myself against—challenges by Wei Dai beyond a one sentence reply per thread if that happens to be necessary.
I feel like I should at least indicate to others that there is significant disagreement
I have been making this point from the start. That which Wei Dai chooses to most actively and strongly defend tends to be things that are bad for the site (see the aggressive encouragement of certain kinds of ‘contrarians’ in particular). I also acknowledged that Wei Dai’s perspective would almost certainly be the reverse.
I disagree with Wei Dai on all points in the parent and find his misrepresentation of me abhorrent (even though he is quite likely to be sincere). I hope that Wei Dai’s ability to persuade others of his particular mind-reading conclusion is limited. My most practical course of action—and the one I will choose to take—seems to be that of harm minimisation. I will not engage with—or, in particular, defend myself against—challenges by Wei Dai beyond a one sentence reply per thread if that happens to be necessary.
I’m confused. I expect saying “your interpretation of my model of LW is wrong, I’m not seeing that much political fighting on LW” would be sufficient for changing Wei’s mind. As it is, your responses appear to be primarily about punishing the very voicing of (incorrect) guesses about your (and others’) beliefs or motives, as opposed to clarifying those beliefs and motives. (The effect it has on me is for example that I’ve just added the “appear to be” disclaimer in the preceding sentence, and I’m somewhat afraid of talking to you about your beliefs or motives.)
Why this tradeoff? I’d like the LW culture to be as much on the ask side as possible, and punishing for voicing hypotheses (when they are wrong) seems to push towards covert uninformed guessing.
I’d like the LW culture to be as much on the ask side as possible, and punishing for voicing hypotheses (when they are wrong) seems to push towards covert uninformed guessing.
Sort of: punishing guessing also makes the “what are your goals here?” question more attractive relative to the “I think your goals are X. Am I right?” question.
That said, I agree that discouraging voicing hypotheses should be done carefully, because I agree that LW culture should be closer to ask than guess.
The effect it has on me is for example that I’ve just added the “appear to be” disclaimer in the preceding sentence, and I’m somewhat afraid of talking to you about your beliefs or motives.
Thank you for adding the disclaimer. My motives in that comment were not primarily about punishing the public declaration of false, negative motives, but rather about following the practical incentives I spent three whole paragraphs patiently explaining in the preceding comment. It would have been worse to make an unqualified public declaration that my motives were that which they were not, in direct contradiction to my explicitly declared reasoning, than a qualified one. After all, “appear” is somewhat subjective, such that the mind of the observer is able to perceive whatever it happens to perceive, and your perceptions can constitute a true fact about the world regardless of whether they are accurate perceptions.
I would of course prefer it if people refrained from making declarations about people’s (negative) motives (for the purpose of shaming them) out of courtesy rather than fear. Yet if you don’t believe courtesy applies and fear happens to reduce the occurrence, that is still a positive outcome.
Note that I take little to no offense at you telling people that I am motivated to punish instances of the act “mind read negative motives in others then publicly declare them” because I would endorse that motive in myself and others if they happen to have it. The only reason the grandparent wasn’t an instance of that (pro-social) kind of punishment was because there were higher priorities at the time.
I recently made the observation:
That’s an untenable interpretation of the written words and plain rude. (Claiming to have) mind read negative beliefs and motives in others then declaring them publicly tends to be frowned upon. Certainly it is frowned upon by me.
That is something I strongly endorse. It is a fairly general norm in the world at large (or, to be technical, there is a norm that such a thing is only to be done to enemies and is a defection against allies). I consider that to be a wise and practical norm. Thinking that it can be freely abandoned and that such actions wouldn’t result in negative side effects strikes me as naive.
I took it as a personal favor when the user I was replying to in the above removed the talk about some motives that I particularly didn’t want to be associated with (and couldn’t plausibly have been executing). (If I recall correctly, the declared motives there implied weakness and stupidity, both of which are more objectionable to me than merely being called ‘evil’.)
punishing for voicing hypotheses (when they are wrong)
People tend to hypothesise negative motives in those they are in conflict with. People also tend to believe what they are told. Communities are much better off when the participants don’t feel free to go around outright declaring (or even just ‘hypothesizing’) that others have motives that they should be shamed for—unless there is a particularly strong reason to make an exception. The ability to criticize actual external behavior is more than sufficient for most purposes.
From my perspective, what I did was to hypothesize that you had the motive to do good but wrong beliefs. The beliefs I attributed to you in my guess were that komponisto’s comment constituted social aggression and/or dark arts, and that therefore countering/punishing it would be good for LW.
I do not understand in what sense I hypothesized “negative motives” in you or where I said or implied that you should be shamed (except in the sense that having systematically wrong beliefs might be considered shameful in a community that prides itself on its rationality, but I’m guessing that’s not what you mean).
You said you didn’t punish me in this instance but that you would endorse doing so, and I bet that many of the people you did punish are in the same bewildered position of wondering what they did to deserve it, and have little idea how they’re supposed to avoid such punishments, except by avoiding drawing your attention. The following do not help:
the fact that you do not have just one pet peeve but a number of them,
your frequent refusals to explain your beliefs and motives when asked,
your tendency to further punish people for more perceived wrongs while they are trying to understand what they did wrong or trying to explain why you may be mistaken about their wrongness, and
your apparent akrasia regarding making posts that might explain how others could avoid being punished by you.
And I note that since you like to defend people besides yourself against perceived wrongs, there is no reliable way to avoid drawing your attention except by not posting or commenting.
EDIT: This reply applies to a previous version of the parent. I’m not sure whether it applies to the current version since just a glance at the new bulleted list was too much.
From my perspective, what I did was to hypothesize that you had the motive to do good but wrong beliefs.
Yes, were I to have actually objected in this manner to your comment, I clearly would have objected to the attribution of “false beliefs result in ” based on untenable mind-reading, and not to “sinister motives”. You will note that Vladimir referred to both. As it happens I was not executing punishment of either kind, and so chose to discuss the insinuation of false motives rather than the insinuation of toxic beliefs, because objecting to the former was the stance I had already taken recently and it is the more significantly objectionable of the two.
You will note that “punishment” here refers to nothing more than labeling a thing and saying it is undesirable. In recent context it refers to the following, in response to some rather… dramatic and inflammatory motives:
That’s an untenable interpretation of the written words and plain rude. (Claiming to have) mind read negative beliefs and motives in others then declaring them publicly tends to be frowned upon. Certainly it is frowned upon by me.
I do endorse such a response. It is a straightforward and rather clearly explained assertion of boundaries. Yes, on a technical analysis of the social implications, such boundary assertion and the labeling of behaviors as ‘rude’ entail a form of ‘punishment’.
This is an (arguably) nuanced and low-level analysis of how social behaviors work, and I note that by the same analysis your own comments tend to be heavily riddled with both punishments and threats. Since this is an area where you use words differently and tend to object in response to low-level analysis, I will note explicitly that under more typical definitions of ‘punishment’ (ones that would not describe your behavior as frequently having the social implication of punishment) I would also reject that word as applying to most of what I do.
You said you didn’t punish me in this instance but that you would endorse doing so, and I bet that many of the people you did punish are in the same bewildered position of wondering what they did to deserve it
I assert that there is no instance where I have ‘punished’ people for accusing me of believing things or having motives that I do not have where I have not been abundantly clear about what I am objecting to. Not only is this not something that comes up frequently; the punishment consists of nothing more than the explanation itself. This can plausibly be described as ‘punishment’ inasmuch as it entails providing potentially negative utility in response to an undesired stimulus, but if that punishment is recognized as punishment then the meaning is already clear.
Your frequent refusals to explain your beliefs and motives when asked
No, Wei. I give an excessive amount of explanation of motives. In fact it wouldn’t surprise me if I provide more, and more detailed, explanations of this kind than anyone on the site—partly because I comment frequently but mostly because such things happen to be of abstract decision-theoretical interest to me. Once again, I don’t like being forced into a corner where I have to speak plainly about something some would take personally, but you really seem set on pushing the issue here. I have already explained in this thread:
Another good reason would be that I find arguing with you about what posts should be made to be both fruitless and unpleasant. I find that the difference in preferences, assumptions and beliefs constitutes an inferential distance that does not seem to be successfully crossed—I don’t find I learn anything from your exhortations and don’t expect to convince you of anything either. Note that I applied rudimentary tact and mentioned only the contextual reason because no matter how many caveats I include it is always going to come across as more personal and rude than I intend to be (where that intent would be the minimum possible given significant disagreement).
“The definition of insanity” may be hyperbole, but it remains the case that doing the same thing again and again while expecting different results is foolish. I sincerely believe that explanations to you specifically have next to no chance of achieving a desired goal and that giving them to you will continue to be detrimental to me, as I have found it to be in the past. For example, the parent primes people to apply interpretations to my comments that I consider ridiculous. All your other comments in this thread can be presumed to have some influence in that direction as well, making it more difficult for people to correctly interpret my words in the future and generally interfering with my ability to communicate successfully. If I hadn’t replied to you I would not have given you a platform from which to speak and influence others. You would have just been left with your initial comment, and if you had kept making comments like “Non-explanatory punisher!” without me engaging you would have just looked like a stalker.
Anyhow it would seem that my unfortunate bias to explain myself when it would be more rational to ignore has struck again.
the punishment consists of nothing more than the explanation itself
You do explain things, but simultaneously you express judgment about the error, which distracts (and thereby detracts) from the explanation. It doesn’t seem to be the case that the punishment consists only of the explanation. An explanation would be stating things like “I don’t actually believe this”, while statements like “Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them and it is bizarre that it has come up.” communicate your judgment about the error, which is additional information that is not particularly useful as part of the explanation of the error. Also, discussing the nature of the error would be even more helpful than stating what it is, for example in the same thread Wei still didn’t understand his error after reading your comment, while Vaniver’s follow-up clarified it nicely: “his point is that if you misunderstand the dynamics of the system, then you can both have the best motives and the worst consequences” (with some flaws, like saying “best”/”worst”, but this is beside the point).
You will note that Vladimir referred to both.
(I didn’t refer to either, I was speaking more generally than this particular conversation. Note how this is an explanation of the way in which your guess happens to be wrong, which is distinct from saying things like “your claims to having mind-reading abilities are abhorrent” etc.)
statements like “Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them
Are significant. It does matter whether or not actual words expressed are being ignored or overwhelmed by insinuations and ‘hypotheses’ that the speaker believes and would have others believe. It is not OK to say that people believe things when their words, right there in the context, say something completely different.
communicate your judgment about the error
Yes, that is intended. The error is a social one for which it is legitimate to claim offense. That is, to judge that the thing should not be done and to suggest that observers also consider that said thing should not be done. Please see my earlier explanation regarding why outlawing the claiming of offense for this type of norm violation is considered detrimental (by me and, implicitly, by most civilised social groups). The precise details of how best to claim offense can and should be optimised for best effect. I of course agree that there is much that I could do to convey my intended point in such a way that I am most likely to get my most desired outcomes. Yet this remains an optimisation of how to most effectively convey “No, incompatible, offense”.
I was speaking more generally than this particular conversation.
So was I, with the statement this replies to.
Note how this is an explanation of the way in which your guess happens to be wrong
So no, it isn’t.
I understand that, my point is that this is the part of the punishment that explains something other than the object-level error in question, which is the distinction Wei was also trying to make.
(I guess my position on offense is that one should deliberately avoid taking or expressing offense in all situations. There are other modes of social enforcement that don’t have offense’s mind-killing properties.)
I was speaking more generally than this particular conversation.
Okay.
I guess my position on offense is that one should deliberately avoid taking or expressing offense in all situations. There are other modes of social enforcement that don’t have offense’s mind-killing properties.
That doesn’t seem right, although perhaps you define “offence claiming” more narrowly than I do. I’m talking about anything from the simple statement “this shouldn’t be done” on up. Basically the least invasive sort of social intervention I can imagine, apart from downvoting and body-language indications—but even then my understanding is that that is where most communication along the lines of ‘offense taking’ actually happens.
That which Wei Dai chooses to most actively and strongly defend tends to be things that are bad for the site (see the aggressive encouragement of certain kinds of ‘contrarians’ in particular). I also acknowledged that Wei Dai’s perspective would almost certainly be the reverse.
I highly value LessWrong and can’t think of any reasons why I would want to do it harm. My past attempts to improve it seem to have met with wide approval (judging from the votes, which are generally much higher than my non-community-related posts), which has caused me to update further in the direction of thinking that my efforts have been helpful instead of harmful.
I understand you don’t want to continue this conversation any further, so I’ll direct the question to others who may be watching this. Does anyone else agree with Wedrifid’s assessment, and if so can you tell me why? If it seems too hard to convince me with object-level arguments, I would also welcome a psychological explanation of why I have this tendency to defend things that are bad for LW. I promise to do my best not to be offended by any proposed explanations.
I highly value LessWrong and can’t think of any reasons why I would want to do it harm.
Nothing I have said suggests this. Indeed, this is explicitly incompatible with my words as I have written them and it is bizarre that it has come up. Once again, to be even more clear, Wei Dai’s sincerity and pro-social intent have never been questioned. Indeed, I riddled the entire preceding conversation from my first reply onward with constant disclaimers to that effect to the extent that I would have considered any more to be outright spamming.
Indeed, this is explicitly incompatible with my words as I have written them and it is bizarre that it has come up.
I’m saying that I can’t think of any reasons, including subconscious reasons, why I might want to do it harm. It seems compatible with your words that I have no conscious reasons but do have subconscious reasons.
I suspect his point is that if you misunderstand the dynamics of the system, then you can both have the best motives and the worst consequences.
Or, far more likely, having the best motives and getting slightly bad consequences. Having the worst consequences is like getting 0 on a multiple-choice test or systematically losing to an efficient market. Potentially as hard as getting the best consequences, and a rather impressive achievement in itself.
Ok, so does anyone agree that he is right (that I misunderstand the dynamics of the system), and if so, tell me why?
(sigh) OK, my two cents.
I honestly lost track of what you and wedrifid were arguing about way back when. It had something to do with whether “fictional” was a useful response to someone asking about how to categorize characters like the Joker when it comes to the specifics of their psychological quirks, IIRC, although I may be mistaking the salient disagreement for some other earlier disagreement (or perhaps a later one).
Somewhere along the line I got the impression that you believe wedrifid’s behavior drags down the general quality of discourse on the site (either on net, or relative to some level of positive contribution you think he would be capable of if he changed his behavior, I’m not sure which) by placing an undue emphasis on describing on-site social patterns in game-theoretical terms. I agree that wedrifid consistently does this but I don’t consider it a negative thing, personally.
[EDIT: To clarify, I agree that wedrifid consistently describes on-site social patterns in game-theoretical terms; I don’t agree with “undue emphasis”]
I do think he’s more abrupt and sometimes rude (in conventional social terms) in his treatment of some folks on this site than I’d prefer, and that a little more consistent kindness would make me more comfortable. Then again, I think the same thing of a lot of people, including most noticeably Eliezer; if the concern is that he’s acting as some kind of poor role model in so doing, I think that ship sailed with or without wedrifid.
I’m less clear on what wedrifid’s objection to your behavior is, exactly, or how he thinks it damages the site. I do think that Vaniver’s characterization of his objection is more accurate than your earlier one was.
[EDIT: Reading this comment, it seems one of the things he objects to is you opposing his opposition to engaging with Dmitry. For my own part, I think engaging with Dmitry was a net negative for the site. Whether opposing opposition to Dmitry is also a net negative, I don’t really know, but it’s certainly plausible.]
I realize this isn’t really an answer to your question, but it’s the mental model I’ve got, and since you seem rather insistent on getting some sort of input on this I figured I’d give you what I have. Feel free to ask followup questions if you like. (Or not.)
Then again, I think the same thing of a lot of people, including most noticeably Eliezer; if the concern is that he’s acting as some kind of poor role model in so doing, I think that ship sailed with or without wedrifid.
The difference between Eliezer and wedrifid is that wedrifid endorses his behavior much more strongly and frequently. With Eliezer, one might think it’s just a personality quirk, or an irrational behavioral tendency that’s an unfortunate side effect of having high status, and hence not worthy of imitation.
I do think that Vaniver’s characterization of his objection is more accurate than your earlier one was.
I didn’t mean to sound very confident (if I did) about my guess of his objection. My first guess was that he and I had a disagreement over how LW currently works, but then he said “I disagree with Wei Dai on all points in the parent” which made me update towards this alternative explanation, which he has also denied, so now I guess the reason is a disagreement over how LW works, but not the one that I specifically gave. (In case someone is wondering why I keep guessing instead of asking, it’s because I already asked and wedrifid didn’t want to answer, even privately.)
Feel free to ask followup questions if you like.
Thanks! What I’m most anxious to know at this point is whether I have some sort of misconception about the social dynamics on LW that causes me to consistently act in ways that are harmful to LW. Do you have any thoughts on that?
The difference between Eliezer and wedrifid is that wedrifid endorses his behavior much more strongly and frequently.
I certainly agree with you about frequently. I have to think more about strongly, but offhand I’m inclined to disagree. I would agree that wedrifid does it more explicitly, but that isn’t the same thing at all.
whether I have some sort of misconception about the social dynamics on LW that causes me to consistently act in ways that are harmful to LW. Do you have any thoughts on that?
Haven’t a clue. I’m not really sure what “harmful to LW” even means.
Perhaps unpacking that phrase is a place to start. What do you think harms the site? What do you think benefits it?
The difference needn’t lie in your motives, conscious or unconscious. You might simply have bad theories about how groups develop. (A possibility: your tendency to understate the role of social signaling in what sometimes pretends to be an objective search for truth.)
But your blindness to potential motives is also problematic—and not just because of the motives themselves, if they exist. For an example of a motive, you might have an anti-E.Y. motive because he hasn’t taken your ideas on the Singularity as seriously as you think they deserve—giving much more attention to a hack job from GiveWell.
Well, you wanted a possible example. There are always possible examples.
I’m saying that I can’t think of any reasons, including subconscious reasons, why I might want to do it harm. It seems compatible with your words that I have no conscious reasons but do have subconscious reasons.
Let it be known that I, Wedrifid, at this time and at this electronic location do declare that I do not believe that Wei Dai has conscious or unconscious motives to sabotage LessWrong. Indeed, the thought is so bizarre and improbable that it was never even considered as a possibility by my search algorithm until Wei brought it up.
It really seems much more likely to me that Wei really did think that chastising those who tried to prevent the feeding of Dmytry was going to help the website rather than damage it. I also believe that Wei Dai declaring war on “Fictional” as a response to “What do you call the Joker?” is based on a true, sincere and evidently heartfelt belief that the world would be a better place without “fictional” (or analogous answers) as a reply in similar contexts.
Enemies are almost never innately evil. (Another probably necessary caveat: That word selection is merely a reference to a post that contains the relevant insight. Actual enemy status is not something to be granted so frivolously. Actively considering agents enemies rather than merely obstacles involves a potentially significant trade-off when it comes to optimization and resource allocation and so is best reserved for things that really matter.)