FWIW, this is one of my favourite articles. I can’t say how much it would help everyone—I think I read it when I was just at the right point to think about procrastination seriously. But I found the analytical breakdown into components an incredibly helpful way to think about it (and I love the sniper rifle joke).
JackV
Tone arguments are not necessarily logical errors
I think people’s objections to tone arguments have often been misinterpreted because (ironically) the objections are often explained more emotively and less dispassionately.
As I understand it, the problem with “tone arguments” is NOT that they’re inherently fallacious, but rather that they’re USUALLY (although not necessarily) rude and inflammatory.
I think a stereotypical exchange might be:
A says something inadvertently offensive to subgroup Beta.
B says “How dare you? Blah blah blah.”
A says “Don’t get so emotional! Also, what you said is wrong because p, q and r.”
C says “Hey, no tone arguments, please.”
A is correct that B’s point might be more persuasive if it were less emotional and were well-crafted to be persuasive to people regardless of whether they’re already aware of the issues or not, and often correct about p, q and r (whether they’re substantive rebuttals of the main point, or just quibbles). But if B fails to put B’s argument in the strongest possible form, it’s A’s responsibility to evaluate the stronger form of the argument, not just critique B for not doing so. And C pointed that out, just in a way that might unfortunately be opaque to A.
I don’t know if the idea works in general, but if it works as described I think it would still be useful even if it doesn’t meet this objection. I don’t foresee any authentication system which can distinguish between “user wants money” and “user has been blackmailed to say they want money as convincingly as possible and not to trigger any hidden panic buttons”, but even if it doesn’t, a password you can’t tell someone would still be more secure because:
you’re not vulnerable to people ringing you up and asking what your password is for a security audit, unless they can persuade you to log on to the system for them
you’re not vulnerable to being kidnapped and coerced remotely; you have to be coerced wherever the log-on system is
I think the “stress detector” idea is unlikely to work unless someone designs it specifically to tell the difference between “hurried” and “coerced”, but I don’t think the system is useless just because it doesn’t solve every problem at once.
OTOH, there are downsides to being too secure: you’re less likely to be kidnapped, but it’s likely to be worse if you ARE.
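As an illustration of the “hidden panic button” idea, here’s a minimal sketch of a duress password: a second password that logs you in normally but silently raises an alarm, so a coercer watching over your shoulder sees nothing unusual. Everything here (the names, the salt, the alerting hook) is invented purely for illustration, not any real system’s API.

```python
# Minimal duress-password sketch, assuming passwords are stored as
# salted hashes. All names and parameters are hypothetical.
import hmac
import hashlib

SALT = b"example-salt"  # a real system would use a per-user random salt

def _hash(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)

NORMAL_HASH = _hash("correct horse battery staple")
DURESS_HASH = _hash("correct horse battery stable")  # deliberately similar

def notify_security_team() -> None:
    print("silent alarm raised")  # stand-in for a real alerting channel

def check_login(password: str) -> bool:
    """Return True on success; the duress password also raises a silent alarm."""
    attempt = _hash(password)
    if hmac.compare_digest(attempt, DURESS_HASH):
        notify_security_team()  # the coercer sees a normal, successful login
        return True
    return hmac.compare_digest(attempt, NORMAL_HASH)
```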
The impression I’ve formed is that physicists have a pretty good idea what’s pretty reliable (the standard model) and what’s still completely speculative (string theory), but at some point the popular science pipeline communicating the difference to intelligent, scientifically literate non-physicists broke down. So I became broadly cynical about non-experimentally-verified physics in general, when, if I’d had more information, I’d have been able to make much more accurate predictions about which ideas were very likely and which were basically just guesses.
I’d not seen Eliezer’s post on “0 and 1 are not probabilities” before. It was a very interesting point. The link at the end was very amusing.
However, it seems he meant “it would be more useful to define probabilities excluding 0 and 1” (which may well be true), but phrased it as if it were a statement of fact. I think this is dangerous and almost always counterproductive—if you mean “I think you are using these words wrong” you should say that, not give the impression you mean “that statement you made is false according to your own interpretation of those words”.
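For anyone who hasn’t read the post: the substantive point, as I understand it, is that the odds (or log-odds) transform, which is the natural representation for Bayesian updating, sends 0 and 1 to infinities, so they behave qualitatively unlike every other probability. A quick sketch of the arithmetic:

```latex
% Odds and log-odds of a probability p
\mathrm{odds}(p) = \frac{p}{1-p}, \qquad \ell(p) = \log\frac{p}{1-p}
% p = 0.9 gives odds of 9:1; p \to 1 gives odds \to \infty, and p = 0
% gives log-odds of -\infty. A Bayesian update adds a finite
% log-likelihood ratio to \ell(p), so no finite amount of evidence can
% move you to, or away from, p = 0 or p = 1.
```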
I once skimmed “How to win friends and influence people”. I didn’t read enough to have a good opinion of the advice (I suspect djcb’s description is right: it’s fairly good advice as long as the author’s experience generalises well, which HTWFAIP probably does better than many books, but not perfectly).
However, what had a profound influence on me was this: although there’s an unfortunate stereotype of people who’ve read too much Carnegie seeming slimy and fake, the author himself seemed to genuinely want to like people and be nice to them, which I thought was lovely.
It seems to me that Eliezer’s post was a list of things that typically seem, in the real world, to be components of people’s happiness, but are commonly missed out when people propose putative (fictional or futuristic) utopias.
It seemed to me that Eliezer was saying “If you propose a utopia without any challenge, humans will not find it satisfying” not “It’s possible to artificially provide challenge in a utopia”.
Hm. Now you say it, I think I’ve definitely read some excellent non-Eliezer articles on Less Wrong. But not as systematically. Are they collated together (“The further sequences”) anywhere? I mean, in some sense, “all promoted articles” is supposed to serve that function, but I’m not sure that’s the best way to start reading. And there are some good “collections of best articles”. But they don’t seem as promoted as the sequences.
If there’s not already, maybe there should be a bit of work in collecting the best articles by theme, and seeing which of them could do with some revising to make the (in retrospect) best point clearer. Preferably enough revising (or just disclaimers) to make it clear that they’re not the Word of God, but not so much that they become bland.
Awesome avoidance of potential disagreement in favour of cooperation for a positive-sum result :)
I agree (as a comparative outsider) that the polite response to Holden is excellent. Many (most?) communities—both online communities and real-world organisations, especially long-standing ones—are not good at it for lots of reasons, and I think the measured response of evaluating and promoting Holden’s post is exactly what LessWrong members would hope LessWrong could do, and they showed it succeeded.
I agree that this is good evidence that LessWrong isn’t just an Eliezer-cult. (The true test would be if Eliezer and another long-standing poster were dismissive of the post, and then other people persuaded them otherwise. In fact, maybe people should roleplay that or something, just to avoid getting stuck in an argument-from-authority trap, but that’s a silly idea. Either way, the fact that other people spoke positively, and Eliezer and other long-standing posters did too, is a good thing.)
However, I’m not sure it’s as uniquely a victory for the rationality of LessWrong as it sounds. In response to srdiamond, Luke quoted tenlier saying “[Holden’s] critique mostly consists of points that are pretty persistently bubbling beneath the surface around here, and get brought up quite a bit. Don’t most people regard this as a great summary of their current views, rather than persuasive in any way?” To me, that suggests that Holden did a really excellent job expressing these views clearly and persuasively. But it also suggests that previous people had tried to express something similar, that it hadn’t been expressed well enough to be widely accepted, and that people reading had failed to sufficiently apply the dictum of “fix your opponents’ arguments for them”. I’m not sure if that’s true (it’s certainly not automatically true), but I suspect it might be. What do people think?
If there’s any truth to it, it suggests one good answer to the recent post http://lesswrong.com/lw/btc/how_can_we_get_more_and_better_lw_contrarians (whether that’s desirable in general or not): as a rationalist exercise, someone familiar with the community and good at writing rationally could survey contrarian views that people in the community may have held but not been able to express, skip any showmanship like pretending to believe them, and just say “I think what some people think is [well-expressed argument]. Do you agree that’s fair? If so, do I and other people think they have a point?” Whether or not that argument is right, it’s still good to engage with it if many people are thinking it.
Yes, I’d agree. (I meant to include that in (B).) I mean, in fact, I’d say that “there are no biological differences between races other than appearance” is basically accurate, apart from a few medical things, without any need for tiptoeing around human biases. Even if the differences were a bit larger (as with gender, or even larger than that), I agree with your last parenthesis that it would probably still be a good idea to (usually) _act_ as if there weren’t any.
From context, it seems “race realism” refers to the idea that there are legitimate differences between races, is that correct? However, I’m not sure if it’s supposed to refer to biological differences specifically, or to any cultural differences. And it seems to be so heavily loaded with connotations I’m unaware of that I would be hesitant to say it was “true” or “not true” even if I knew the answers to the questions in the first two sentences.
Let me try to summarise the obvious parts of the situation as I understand it. I contend that:
(A) There are some measurable differences between ethnicities that are most plausibly attributed to biological differences. (There are some famous examples, such as greater susceptibility of some people to skin cancer, or sickle cell anemia. I assume there are smaller differences elsewhere. If anyone seriously disagrees, say so.)
(B) These are massively dwarfed by the correlation of ethnicity with cultural differences in almost all cases.
(C) There is a social taboo against admitting (A).
(D) There is a large correlation between ethnicity and various cultural factors, and between those cultural factors and each other.
(E) It is sometimes possible to draw probabilistic inferences based on (D). Eg. with no other information, you may guess that someone on the street in London is more likely to be a British citizen if they are Indian than East Asian (or vice versa, whichever is true). (There’s a toy calculation of this kind of inference after this list.)
(F) The human brain deals very badly with probabilistic inferences. If you guess someone’s culture based on their ethnicity or dress, you are likely to maintain that view as long as possible even in the face of new information, until you suddenly flip to the opposite view. Because of this, there is (rightly IMHO) a social taboo against doing (E) even when it might make sense.
(G) People who are and/or think they are good at drawing logical inferences a la (E), but don’t have as much personal experience of the pitfalls described in (F), are likely to resent the social taboo described in (F) because it seems fussy and nonsensical to them. I am somewhat prone to this error (not so much with race, but with other things).
(H) The word “racist” is horrendously ill-defined. It is used both to mean “someone or something which treats people differently based on ‘race’, rightly or wrongly” (including cases where treating people differently is the only sensible thing to do, such as preventative advice for medical conditions, or advice on how to avoid bad racism from other people) and to mean “someone or something which discriminates based on race in a morally wrong way”. Thus arguing about whether something is “racist” is typically counterproductive.
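To make (E) and (F) concrete, here’s the toy calculation promised above: the inference in (E) as a single Bayesian update. All the numbers are invented purely for illustration; they’re not real demographic statistics.

```python
# Toy Bayesian update for the inference in (E). Every number here is
# made up purely for illustration.

def posterior(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """P(H | observation) via Bayes' theorem."""
    joint_h = prior * p_obs_given_h
    joint_not_h = (1 - prior) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

# H = "this person is a British citizen", observation = some cultural cue.
prior = 0.7              # invented base rate for passers-by in London
p_cue_if_citizen = 0.4   # invented likelihoods
p_cue_if_not = 0.1

print(posterior(prior, p_cue_if_citizen, p_cue_if_not))  # ~0.903

# The point of (F): an ideal reasoner moves from 0.7 to ~0.9 in one
# smooth step, and back down again on contrary evidence. Humans tend to
# stick at their first guess and then flip all the way, which is why the
# taboo in (F) can be sensible even when the inference in (E) is valid.
```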
I admit I only skimmed the OP’s transcript, but my impression is that he fairly describes why he is frustrated that it is difficult to talk about these issues, but I am extremely leery of a lot of the examples he uses.
I was going to write more, but am not sure how to put it. How am I doing so far...? :)
I would say add [Video]: “[Link]” would perpetuate the misunderstanding that there may be no immediate content, whereas “[Video]” correctly warns people who (for whatever reason) can’t easily follow arguments in video format.
I think this is directly relevant to the idea of embracing contrarian comments.
The idea of having extra categories of voting is problematic, because it’s always easy to suggest, but only worthwhile if people will often want to distinguish the categories, and if distinguishing them will be useful. So I think it’s normally a well-meaning but doomed suggestion, and better to stick to just one.
However, whether or not it would be a good idea to actually implement, I think separating “interested” and “agree” is a good way of expressing what happens to contrarian comments. I don’t have first-hand experience, but based on what I usually see happening at message boards, I suspect a common case is something like:
Someone posts a contrarian comment. Because they are not already a community stalwart, they also compose the comment in a way which is low-status within the community (eg. bits of bad reasoning, waffle, embedded in other assumptions which disagree with the community).
Thus, people choose between “there’s something interesting here” and “In general, this comment doesn’t support the norms we want this community to represent.” The latter usually wins except when the commenter happens to be popular or very articulate.
The interesting/agree distinction would be relevant in cases like this, for instance:
I’m pretty sure this is wrong, but I can’t explain why, I’d like to see someone else tackle it and agree/disagree
I think this comment is mostly sub-par, but the core idea is really, really interesting
I might click “upvote” for a comment I thought was funny, but want a greater level of agreement for a comment I specifically wanted to endorse.
There’s a possibly similar distinction between Stack Overflow and Stack Overflow Meta, because negative votes affect user rank on Stack Overflow but not on Meta. On Stack Overflow, voting generally refers to perceived quality. On Meta, it normally means agreement.
I’m not sure I’d advocate this as a good idea, but it seemed an interesting possibility given the problem proposed. FWIW, if it were implemented, it’d want a lot of scrutiny and brainstorming, but my first reaction would be to leave the voting as supposedly meaning “interesting”, and usually sort by that, but add a secondary vote meaning “agree” or “disagree” or similar terms that can add a nuance to it.
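As a very rough sketch of what that might look like (the field names and sort rule here are my invention, not a real LessWrong feature): keep sorting by “interesting”, and carry “agree” along as a separate, visible nuance.

```python
# Rough sketch of a two-axis comment score: sort by "interesting",
# display "agree" as a separate nuance. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    interesting: int = 0   # primary axis: worth reading?
    agree: int = 0         # secondary axis: net agreement (can be negative)

def sorted_for_display(comments: list[Comment]) -> list[Comment]:
    # Contrarian comments with high "interesting" but negative "agree"
    # still sort near the top instead of being buried.
    return sorted(comments, key=lambda c: c.interesting, reverse=True)

thread = [Comment("mainstream view", interesting=5, agree=12),
          Comment("contrarian view", interesting=9, agree=-4)]
for c in sorted_for_display(thread):
    print(f"{c.text}: interesting={c.interesting}, agree={c.agree:+d}")
```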
Edit: Come to think of it, a similar effect is achieved by a social convention of people upvoting the comment, but also upvoting a reply that says “this part good, this part bad”. If that happens, it should fill the same niche, but I don’t know if it is happening enough.
That’s an awesome comment. I’m interested in which specific cues came up that you each realised the other didn’t get :)
Perhaps the right level of warning is to say “Cambridge UK” in the title and first line, but not take a position on whether other people are likely to be interested or not...?
I’ve been reading the answers and trying to put into words what I want to say. Ideally people will not just practise being more specific, but experience that when they’re more specific, they immediately communicate more effectively.
For instance, think of three or four topics people probably have an opinion on, starting with the innocuous (do you like movie X?) and going on to the controversial (what do you think of abortion?). Either have a list in advance, or ask people for examples. Perhaps have a shortlist and let people choose, or suggest something else if they really want?
I picked the movie example because it’s something people usually feel happy to talk about, but can be very invested in their opinion of. Ideally it’s something people will immediately disagree about. I don’t think this is difficult—in a group of 10, I’d expect to name only one or two movies before people disagreed, even though social pressure usually means they won’t immediately say so.
Step 1: Establish that people disagree, and find it hard to come to an agreement. This should take about 30s. People will hopefully “agree to disagree” but not actually understand each other’s position. Eg. “Star Wars was great, it was so exciting.” “Star Wars was boring and sucked and didn’t make any sense.”
Step 2: Ask WHAT people like about it. Encourage people to give specific examples at first (eg. “I loved it when Luke did X”) and then draw generalisations from them (“I really empathised with Luke and I was excited when he won”, “I’ve read stories about farmboys who became heroes before, I already know what happens, bring me some intellectual psychological fare instead”). Emphasise that everyone is on the same side, and they shouldn’t worry about being embarrassed or being “wrong”.
Step 3: Establish that (probably) they interpreted what the other person said in terms of what they were thinking (eg. “How can blowing up a spaceship be boring?”) when actually the other person was thinking about something they hadn’t thought of (eg. “OK, I guess if you care about the physics, it would be annoying that they are completely and utterly made up, it just never occurred to me that anyone would worry about that.”)
I may be hoping too much, but this is definitely the sort of process I’ve gone through to rapidly reach an understanding with someone when we previously differed a lot, and for some simple examples, it doesn’t seem too much to hope we can do so that rapidly. Now, go through the process with two to four statements, ending with something fairly controversial.
Hopefully (this is pure speculation, I’ve not tried it), giving specific examples will lead to people actually reaching understandings, imprinting the experience as a positive and successful one. Then encourage people to say “Can you give me an example of when [bad thing] would be as bad as you feel?” as often as possible. Give examples where being specific is more persuasive (eg. “We value quality” vs “We aim for as few bugs as possible” vs “We triage bug reports as they come in. All bugs we decide to fix are fixed before the next version is released”, or “we will close loopholes in the tax code” vs “we will remove the tax exemption on X”), and encourage people to shout out more.
For that matter, I couldn’t stop my mind throwing up objections like “Frodo buys off-the-rack clothes? From where exactly? Surely he’d have tailor made? Wouldn’t he be translated into British English as saying ‘trousers’? Hobbit feet are big and hairy for Hobbits, but how big are they compared to human feet—are their feet and inches 2⁄3 the size?”
It didn’t occur to me until I’d read past the first two paragraphs that we were even theoretically supposed to ACTUALLY guess what size Frodo would wear. And I’m still unsure if the badness of the Frodo example was supposed to be part of the joke or not—I mean, it’s fairly funny if it is, but it’s the sort of mistake (a bad example for a good point) I’d expect to see even made by intelligent, competent writers.
And I mean, I’m fairly sure that the fictional bias effect is real :)
I remember when you drew this analogy to different interpretations of QM, and I was thinking it over.
The way I put it to myself was that the difference between “laws of physics apply” and “everything acts AS IF the laws of physics apply, but the photon blinks out of existence” is not falsifiable, so for our current physics, the two theories are actually just different reformulations of the same theory.
However, Occam’s razor says that, of the two theories, the right one to use is “laws of physics apply”, for two reasons: firstly, it’s a lot simpler to calculate with; and secondly, if we ever DO find a way of testing the difference, we’re 99.9% sure that we’ll discover that the theory consistent with conservation of energy applies.
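If you want to make that use of Occam’s razor quantitative, the standard tool (my gloss, not part of the original discussion) is a complexity prior, a la minimum description length or Solomonoff induction:

```latex
% Complexity prior: weight each hypothesis H by its description length K(H)
P(H) \propto 2^{-K(H)}
% H' = (standard physics) + ("but the photon blinks out of existence")
% needs strictly more bits to specify than H = (standard physics alone),
% so K(H') > K(H) and hence P(H') < P(H), even though H and H' predict
% exactly the same observations so far.
```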
Excellent point!