“When I was young I shoved my ignorance in people’s faces. They beat me with sticks. By the time I was forty my blunt instrument had been honed to a fine cutting point for me. If you hide your ignorance, no one will hit you and you’ll never learn.”
-- Fahrenheit 451
I’ll be sticking around a while, although I’m not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it’s beautiful). It’s not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That’s probably mostly it.
Don’t insult anyone, ever. If Wagner posts, either say “Hmm, why do you believe Mendelssohn’s music to be derivative?” or silently downvote, but don’t call him an antisemitic piece of shit.
Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
Stick closely to the question and do not involve the personalities of debaters.
Exception to the above: it’s okay to pass judgement on a personality trait if it’s a compliment. If you can’t always avoid insulting people, occasionally complimenting them can help.
A lot of things are insults. You will slip up. This won’t make people dislike you.
If you know what a polite and friendly tone is, have one.
If someone isn’t polite and friendly, it means you need to be more polite and friendly.
If they’re being very rude and mean and it’s getting annoying, you can gently mention it. Still make the rest of your post polite and friendly and about the question.
If the “polite and about the question” part is empty, don’t post.
If you have insulted someone in a thread—either more than once, or once and people are still hostile despite you being extra nice afterwards—people will keep being hostile in the thread and you should probably walk away from it.
If hostility in a thread is leaking into your mood, walk away from the whole site for a little while.
When you post in another thread, people will not hold any grudges against you from previous threads. Sorry for your epic quest, but we don’t have much against you right now.
Apologies (rather than silence) are a good idea if you were clearly in the wrong and not overly tempted to add “but”.
On politeness:
Some politeness norms are stupid and harmful and wrong, like “You must not criticize even if explicitly asked to” or “Disagreement is impolite”. Fortunately, we don’t have these here.
Some are good, like not insulting people. Insulting messages get across poorly. This happens even when people ignore the insult to answer the substance, because the message is overloaded.
Some are mostly local communication protocols that help but can be costly to constrain your message around. It’s okay to drop them if you can’t bear the cost.
Some are about fostering personal liking between people. They’re worthwhile to people who want that and noise to people who don’t.
Taking pains to be polite is training wheels. People who are good with words can say precisely and concisely what they mean in a completely neutral tone. People who aren’t are injecting lots of accidental interpersonal content, so we need to make it harmless explicitly.
People who are exempted:
The aforementioned people, who will never accidentally insult anyone;
People whose contribution is so incredibly awesome that it compensates for being insufferable; I know of a few but none on LessWrong;
wedrifid, who is somehow capable of pleasant interaction while being a complete jerk.
I’ll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid’s behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what’s going on,
2) correctly recognizing that attempts to lower someone’s status are attacks
3) honoring the obligations of implicit social alliances when an ally is attacked
I endorse this and have been trying to get better about #3 myself.
Might be too advanced for someone who just learned that saying “Please stop being stupid.” is a bad idea.
Well… I’ve seen people use nearly that exact phrase to great effect at times… But that’s not the sort of thing you’d want to include in a ‘basics’ list either.
Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!
The phrase “social alliances” makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam’s ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can’t come to agreement with Sam, I endorse acknowledging that I’ve unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that’s beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it’s failing to reflect on whether I endorse A. If I do neither, then the community doesn’t degenerate into tribal warfare, it degenerates into chaos.
Admittedly, chaos can be more fun, but I don’t really endorse it.
All of that said, I do recognize that explicitly talking about “social alliances” (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn’t help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).
(I feel vaguely like Will_Newsome, now. I wonder if that’s a good thing.)
I feel vaguely like Will_Newsome, now. I wonder if that’s a good thing.
Start to worry if you begin to feel morally obliged to engage in activity ‘Z’ that neither you, Sam, nor Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
It wasn’t quite as dramatic as you make it sound, but it was certainly fascinating to live through. The general case is here. The specifics… hm. I remain uncomfortable discussing the specifics in public.
if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you’ve mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
Is establishing yourself as a reliable ally an instrumental or terminal goal for you?
Instrumental.
If the former, what advantages does it bring in a group blog / discussion forum like this one?
Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don’t know how I could begin to itemize it. To pick one that came up recently, though, here’s a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals. Another one that comes up far more often is other people’s willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.
The kinds of alliances you’ve mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally.
Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.
If you mean to say further that it doesn’t affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo “ally.”
People’s estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don’t engage in discussion, because I no longer trust that they will engage reliably.
Are you hoping to establish other kinds of alliances here?
Not that I can think of, but honestly this question bewilders me, so it’s possible that you’re asking about something I’m not even considering. What kind of alliances do you have in mind?
To pick one that came up recently, though, here’s a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people’s willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.
It’s not clear to me that these attributes are strongly (or even positively) correlated with willingness to “stick up” for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you’re mostly signaling that you’re not timid, with “being a good discussion partner” a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)
What kind of alliances do you have in mind?
I didn’t have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be for example that you’re looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.
It’s not clear to me that these attributes are strongly (or even positively) correlated with willingness to “stick up” for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you’re mostly signaling that you’re not timid
This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn’t seem to be a very accurate description of reality. A lot of information—and information I consider important at that—can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive ‘timidity’ can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you’re not timid seems to be a mistake.
In my own experience—from back when I was timid in the extreme—the sort of “sticking up for” someone, jumping to their defense against (unfair or undesirable) aggression, was one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.
(This is the impression I have of wedrifid, for example.)
Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave’s model seems far more accurate and useful in this case.
Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave’s model seems far more accurate and useful in this case.
I find that my brain doesn’t automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven’t found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.
I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I’d be curious to find out.
Fair enough; it may be that I overestimate the value of what I’m calling trust here.
Just for my own clarity, when you say that what I’m doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we’ve been discussing on this thread (or are they equivalent)?
I’m not especially looking to make real-life friends, though there are folks here who I wouldn’t mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam’s ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can’t come to agreement with Sam, I endorse acknowledging that I’ve unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that’s beside the point here.)
I really like your illustration here. To the extent that this is what you were trying to convey by “3)” in your analysis of wedrifid’s style then I endorse it. I wouldn’t have used the “alliances” description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I’m happy with it as a simple model.
Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of “Sam”, “Pat” and “A”. In particular there are many behaviors “A” whose execution will immediately place the victim of said behavior into the role of “ally that I am obliged to support”.
Yeah, agreed about the distracting phrasing. I find it’s a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.
Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.
Also, if you have an articulable model of how you make those judgments I’d be interested, especially if it uses more socially acceptable language than mine does.
Edit: Also, I’m really curious as to the reasoning of whoever downvoted that. I commit to preserving that person’s anonymity if they PM me about their reasoning.
I’m really curious as to the reasoning of whoever downvoted that.
For what it is worth, sampling over time suggests multiple people—at one point there were multiple upvotes.
I’m somewhat less curious. I just assumed it was people from the ‘green’ social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.
wedrifid, who is somehow capable of pleasant interaction while being a complete jerk
Regardless of whether or not this is compatible with being a “complete jerk” in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one’s other goals (naturally the methods used are community-specific but that is more than good enough).
In saying this, I don’t know whether I’m expanding on your point or disagreeing with it.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I’ve seen so far (your comment, TheOtherDave’s, this comment by wedrifid) are not really forming into a coherent whole for me.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I’ve seen so far (your comment, TheOtherDave’s, this comment by wedrifid) are not really forming into a coherent whole for me.
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
Regardless of whether or not this is compatible with being a “complete jerk” in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one’s other goals (naturally the methods used are community-specific but that is more than good enough).
I appreciate your kind words komponisto! You inspire me to live up to them.
Plus, I like the idea of losing so much karma in one day and then eventually earning it all back
This discussion is off-topic for the “Rationality Quotes” thread, but...
If you’re interested in an easy way to gain karma, you might want to try an experimental method I’ve been kicking around:
Take an article from Wikipedia on a bias that we don’t have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer’s more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
It needs to be a drawn out and painful and embarrassing process.
Oh, you want a Quest, not a goal. :-)
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.
Try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
I nominate this as the Less Wrong Summer Challenge, for everybody.
(One modification I’d make: it shouldn’t necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)
Indeed, that would work if karma was merely the goal. But chaosmosis expressed a desire for a “painful and embarrassing process”, meaning that the ante and risk must be higher.
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
That actually sounds fun now that you put it like that!
I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.
Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.
Once that’s done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a “strategy” onto a run of comments that happened to succeed.
Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)
Have some third party (or several) that LW would trust hold on to it in secrect.
Nitpick: cryptography solves this much more neatly.
Of course, people could accuse you of having an efficient way of factorising numbers, but if you do, karma is going to be the least of anyone’s concerns.
Nitpick: cryptography solves this much more neatly.
But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).
The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects.
What kind of side effects? I have no formal training in cryptography, so please forgive me if this is a naive question.
What kind of side effects? I have no formal training in cryptography, so please forgive me if this is a naive question.
I mean you still have to give the encrypted data to someone. They can’t tell what it is but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don’t want the act of giving encrypted fore-notice to influence behavior.
Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
Better yet… embed five different predictions in that header. When the time comes, reveal just the one that turned out most correct!
But of the four trusted parties who aren’t called upon, what’s the probability that all four would keep quiet after the fifth is called upon to post the prediction or prediction hash?
You’re right, of course. I didn’t think that through. There haven’t been any good “gain the habit of really thinking things through” exercises for a Skill-of-the-Week post, have there?
“Recognizing when you’ve actually thought thoroughly” is the specific failure mode I’m thinking of; but that’s probably highly correlated with recognizing when to start thinking thoroughly.
I feel like such a skill may be difficult to consciously train without a tutor:
Rice’s theorem will tell you that you cannot, without already knowing unknown unknowns, determine which knowledge is safe to ignore.
-- @afoolswisdom
Besides, in the prediction-hash case, they may well not post right away.
Yes, the first thing I thought of was Quirrell’s hashed prediction; but it doesn’t seem that everyone’s forgotten yet, as of last month.
Now that is an interesting concept. I like where this subthread is going.
Interesting comparisons to other systems involving currency come to mind.
EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties… for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts look like they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--
That’s like getting a black belt in karate by buying one from the martial arts shop. It isn’t karmawhoring unless you’re getting karma from real people who really thought your comments worth upvoting.
“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?
It is good to have one’s comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute’s reward—money—is of some actual use.
It’s a bias, as far as I’m concerned, and something that needs to be overcome. People with egos can be right, but if one can’t deal with the fact that they’re either right or wrong regardless of their egotism, then one is that much slower to update.
It’s not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across.
It is what we would call an “instrumental rationality” problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos… which you seem to be taking steps towards now!
Some users don’t read the HP:MoR threads, and some users only read the HP:MoR threads. You don’t have to feel like you have a reputation here yet. Also, welcome to Less Wrong.
There are threads on other sites (the TVTropes one is the biggest, I think, but I know the xkcd forums have a thread, and I’m sure others do as well). Part of the value of having HP:MoR threads here is that it makes it likely that people who come here for the MoR threads will stay for the rest of the site, but I agree that the karma on them is atypical for karma on the site, and decoupling it would have some value (but I suspect higher costs than value).
-- Fahrenheit 451
I’ll be sticking around a while, although I’m not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it’s beautiful). It’s not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That’s probably mostly it.
Tips for dealing with people with big egos:
Don’t insult anyone, ever. If Wagner posts, either say “Hmm, why do you believe Mendelssohn’s music to be derivative?” or silently downvote, but don’t call him an antisemitic piece of shit.
Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
Stick closely to the question and do not involve the personalities of debaters.
Exception to the above: it’s okay to pass judgement on a personality trait if it’s a compliment. If you can’t always avoid insulting people, occasionally complimenting them can help.
A lot of things are insults. You will slip up. This won’t make people dislike you.
If you know what a polite and friendly tone is, have one.
If someone isn’t polite and friendly, it means you need to be more polite and friendly.
If they’re being very rude and mean and it’s getting annoying, you can gently mention it. Still make the rest of your post polite and friendly and about the question.
If the “polite and about the question” part is empty, don’t post.
If you have insulted someone in a thread—either more than once, or once and people are still hostile despite you being extra nice afterwards—people will keep being hostile in the thread and you should probably walk away from it.
If hostility in a thread is leaking into your mood, walk away from the whole site for a little while.
When you post in another thread, people will not hold any grudges against you from previous threads. Sorry for your epic quest, but we don’t have much against you right now.
Apologies (rather than silence) are a good idea if you were clearly in the wrong and not overly tempted to add “but”.
On politeness:
Some politeness norms are stupid and harmful and wrong, like “You must not criticize even if explicitly asked to” or “Disagreement is impolite”. Fortunately, we don’t have these here.
Some are good, like not insulting people. Insulting messages get across poorly. This happens even when people ignore the insult to answer the substance, because the message is overloaded.
Some are mostly local communication protocols that help but can be costly to constrain your message around. It’s okay to drop them if you can’t bear the cost.
Some are about fostering personal liking between people. They’re worthwhile to people who want that and noise to people who don’t.
Taking pains to be polite is training wheels. People who are good with words can say precisely and concisely what they mean in a completely neutral tone. People who aren’t are injecting lots of accidental interpersonal content, so we need to make it harmless explicitly.
People who are exempted:
The aforementioned people, who will never accidentally insult anyone;
People whose contribution is so incredibly awesome that it compensates for being insufferable; I know of a few but none on LessWrong;
wedrifid, who is somehow capable of pleasant interaction while being a complete jerk.
I’ll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid’s behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what’s going on,
2) correctly recognizing that attempts to lower someone’s status are attacks
3) honoring the obligations of implicit social alliances when an ally is attacked
I endorse this and have been trying to get better about #3 myself.
Might be too advanced for someone who just learned that saying “Please stop being stupid.” is a bad idea.
Sure. Then again, if you’d only intended that for chaosmosis’ benefit, I assume you’d have PMed it.
Well… I’ve seen people use nearly that exact phrase to great effect at times… But that’s not the sort of thing you’d want to include in a ‘basics’ list either.
Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!
The phrase “social alliances” makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam’s ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can’t come to agreement with Sam, I endorse acknowledging that I’ve unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that’s beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it’s failing to reflect on whether I endorse A. If I do neither, then the community doesn’t degenerate into tribal warfare, it degenerates into chaos.
Admittedly, chaos can be more fun, but I don’t really endorse it.
All of that said, I do recognize that explicitly talking about “social alliances” (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn’t help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).
(I feel vaguely like Will_Newsome, now. I wonder if that’s a good thing.)
Start to worry if you begin to feel morally obliged to engage in activity ‘Z’ that neither you, Sam, nor Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
Been there, done that. (Not specifically. It would be creepy if you’d gotten the specifics right.)
I blame the stroke, though.
Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.
It wasn’t quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics… hm.
I remain uncomfortable discussing the specifics in public.
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you’ve mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
Instrumental.
Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don’t know how I could begin to itemize it.
To pick one that came up recently, though, here’s a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people’s willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.
Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.
If you mean to say further that it doesn’t affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo “ally.”
People’s estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don’t engage in discussion, because I no longer trust that they will engage reliably.
Not that I can think of, but honestly this question bewilders me, so it’s possible that you’re asking about something I’m not even considering. What kind of alliances do you have in mind?
It’s not clear to me that these attributes are strongly (or even positively) correlated with willingness to “stick up” for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you’re mostly signaling that you’re not timid, with “being a good discussion partner” a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)
I didn’t have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be for example that you’re looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.
This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn’t seem to be a very accurate description of reality. A lot of information—and information I consider important at that—can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive ‘timidity’ can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you’re not timid seems to be a mistake.
In my own experience—from back when I was timid in the extreme—the sort of “sticking up for” someone, jumping to their defense against (unfair or undesirable) aggression, was one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.
Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave’s model seems far more accurate and useful in this case.
I find that my brain doesn’t automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven’t found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.
I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I’d be curious to find out.
Fair enough; it may be that I overestimate the value of what I’m calling trust here.
Just for my own clarity, when you say that what I’m doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we’ve been discussing on this thread (or are they equivalent)?
I’m not especially looking to make real-life friends, though there are folks here who I wouldn’t mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.
I was talking about the abstract behavior that we were discussing.
I really like your illustration here. To the extent that this is what you were trying to convey by “3)” in your analysis of wedrifid’s style then I endorse it. I wouldn’t have used the “alliances” description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I’m happy with it as a simple model.
Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of “Sam”, “Pat” and “A”. In particular there are many behaviors “A” whose execution will immediately place the victim of said behavior into the role of “ally that I am obliged to support”.
Yeah, agreed about the distracting phrasing. I find it’s a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.
Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.
Also, if you have an articulable model of how you make those judgments I’d be interested, especially if it uses more socially acceptable language than mine does.
Edit: Also, I’m really curious as to the reasoning of whoever downvoted that. I commit to preserving that person’s anonymity if they PM me about their reasoning.
For what it is worth, sampling over time suggests multiple people—at one point there were multiple upvotes.
I’m somewhat less curious. I just assumed it was people from the ‘green’ social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.
Regardless of whether or not this is compatible with being a “complete jerk” in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one’s other goals (naturally the methods used are community-specific but that is more than good enough).
In saying this, I don’t know whether I’m expanding on your point or disagreeing with it.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I’ve seen so far (your comment, TheOtherDave’s, this comment by wedrifid) are not really forming into a coherent whole for me.
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
I appreciate your kind words komponisto! You inspire me to live up to them.
This discussion is off-topic for the “Rationality Quotes” thread, but...
If you’re interested in an easy way to gain karma, you might want to try an experimental method I’ve been kicking around:
Take an article from Wikipedia on a bias that we don’t have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer’s more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.
Maybe I’ll eventually write something like that. Not yet.
Oh, you want a Quest, not a goal. :-)
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.
I nominate this as the Less Wrong Summer Challenge, for everybody.
(One modification I’d make: it shouldn’t necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)
And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.
You just need a reasonably friendly tone. I have a bunch of karma, and I haven’t posted any articles yet (though I’m working on it).
Indeed, that would work if karma was merely the goal. But chaosmosis expressed a desire for a “painful and embarrassing process”, meaning that the ante and risk must be higher.
That actually sounds fun now that you put it like that!
One day I will write “How to karmawhore with LessWrong comments” if I can work out how to do it in such a way that it won’t get −5000 within an hour.
I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.
Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.
Once that’s done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a “strategy” onto a run of comments that happened to succeed.
Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)
Nitpick: cryptography solves this much more neatly.
Of course, people could accuse you of having an efficient way of factorising numbers, but if you do, karma is going to be the least of anyone’s concerns.
Factorization doesn’t enter into it—to precommit to a message that you will later reveal publically, publish a hash of the (salted) message.
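To make that concrete, here is a minimal sketch of the salted-hash precommitment in Python (standard library only); the strategy text and salt size are purely illustrative:

    import hashlib
    import secrets

    def commit(message: bytes) -> tuple[str, bytes]:
        # Publish the returned digest now; keep the message and salt private.
        salt = secrets.token_bytes(16)  # random salt stops people guessing the message from its hash
        digest = hashlib.sha256(salt + message).hexdigest()
        return digest, salt

    def verify(published_digest: str, message: bytes, salt: bytes) -> bool:
        # Anyone can check the later reveal against the earlier published digest.
        return hashlib.sha256(salt + message).hexdigest() == published_digest

    # Hypothetical usage: commit to a strategy now, reveal the message and salt later.
    strategy = b"karma strategy: post short agreeable replies early in busy threads"
    digest, salt = commit(strategy)
    print("publish now:", digest)
    print("reveal later checks out:", verify(digest, strategy, salt))

Publishing only the digest commits you to the strategy without revealing anything about its contents until you choose to.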
But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).
What kind of side effects? I have no formal training in cryptography, so please forgive me if this is a naive question.
I mean you still have to give the encrypted data to someone. They can’t tell what it is but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don’t want the act of giving encrypted fore-notice to influence behavior.
Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
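For illustration, here is a rough sketch of that image trick in Python using Pillow, hiding a payload in the least significant bits of the pixel channels and recovering it later. The cover image, payload, and LSB scheme are all hypothetical choices, and in practice you would encrypt the payload with the password first:

    from PIL import Image  # assumes the Pillow package is available

    def embed(cover: Image.Image, payload: bytes) -> Image.Image:
        # Hide a length-prefixed payload in the least significant bit of each channel.
        data = len(payload).to_bytes(4, "big") + payload
        bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
        flat = [channel for pixel in cover.convert("RGB").getdata() for channel in pixel]
        if len(bits) > len(flat):
            raise ValueError("cover image too small for payload")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | bit
        out = Image.new("RGB", cover.size)
        out.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
        return out

    def extract(stego: Image.Image) -> bytes:
        # Read the 4-byte length prefix, then that many payload bytes, from the LSBs.
        flat = [channel for pixel in stego.convert("RGB").getdata() for channel in pixel]
        def read(start_bit: int, n_bytes: int) -> bytes:
            out = bytearray()
            for b in range(n_bytes):
                byte = 0
                for i in range(8):
                    byte = (byte << 1) | (flat[start_bit + b * 8 + i] & 1)
                out.append(byte)
            return bytes(out)
        length = int.from_bytes(read(0, 4), "big")
        return read(32, length)

    # Hypothetical usage: a generated "pretty header" image carrying a hidden note.
    cover = Image.new("RGB", (200, 100), (120, 180, 240))
    stego = embed(cover, b"the secret strategy goes here, ideally encrypted")
    print(extract(stego))

If you actually posted such an image, you would need a lossless format like PNG, since JPEG compression destroys the low-order bits.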
Better yet… embed five different predictions in that header. When the time comes, reveal just the one that turned out most correct!
Hmm yes, there might be a hidden weakness in my master plan as far as accountability is concerned :-)
None that were not extant in the original scheme, assuming there are at least five people on LW who’d be considered trusted parties.
But of the four trusted parties who aren’t called upon, what’s the probability that all four would keep quiet after the fifth is called upon to post the prediction or prediction hash?
You’re right, of course. I didn’t think that through. There haven’t been any good “gain the habit of really thinking things through” exercises for a Skill-of-the-Week post, have there?
Bear in mind that it’s often not worth the effort. I think the skill to train would be recognizing when it might be.
Besides, in the prediction-hash case, they may well not post right away.
“Recognizing when you’ve actually thought thoroughly” is the specific failure mode I’m thinking of; but that’s probably highly correlated with recognizing when to start thinking thoroughly.
I feel like such a skill may be difficult to consciously train without a tutor:
-- @afoolswisdom
Yes, the first thing I thought of was Quirrell’s hashed prediction; but it doesn’t seem that everyone’s forgotten yet, as of last month.
My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)
IME, per-comment EV is way higher in the HP:MoR discussion threads.
It so is. Karmawhoring in those is easy.
This suggests measuring posts for comment EV.
Now that is an interesting concept. I like where this subthread is going.
Interesting comparisons to other systems involving currency come to mind.
EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties… for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts look like they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--
...okay, perhaps some sleep is in order first.
It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.
Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.
That’s like getting a black belt in karate by buying one from the martial arts shop. It isn’t karmawhoring unless you’re getting karma from real people who really thought your comments worth upvoting.
“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?
It is good to have one’s comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute’s reward—money—is of some actual use.
Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.
This would indeed count as “minimal contribution”, but still sounds like a lot of work...
This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.
You mean to the extent that any problem at all is a rationality problem, or something else?
It’s a bias, as far as I’m concerned, and something that needs to be overcome. People with egos can be right, but if one can’t deal with the fact that they’re either right or wrong regardless of their egotism, then one is that much slower to update.
Dealing with others’ irrationality is very much a rationality problem.
Ignore this.
It is what we would call an “instrumental rationality” problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos… which you seem to be taking steps towards now!
And I thought I was the only one getting pummeled here...
UPDATE: Lame quest was lame. I’m already back up to positive karma although I hit −100 a couple days ago.
Maybe I should try for −1000 next time, instead.
Some users don’t read the HP:MoR threads, and some users only read the HP:MoR threads. You don’t have to feel like you have a reputation here yet. Also, welcome to Less Wrong.
Has anybody ever considered moving the HP:MoR threads to another site?
There are threads on other sites (the TVTropes one is the biggest, I think, but I know the xkcd forums have a thread, and I’m sure others do as well). Part of the value of having HP:MoR threads here is that it makes it likely that people who come here for the MoR threads will stay for the rest of the site, but I agree that the karma on them is atypical for karma on the site, and decoupling it would have some value (but I suspect higher costs than value).
As I mentioned elsewhere, it would have the effect of making http://lesswrong.com/r/discussion/topcomments/ more useful (for people who don’t read HP:MoR, such as me).