Logical Rudeness
The concept of “logical rudeness” (which I’m pretty sure I first found here, HT) is one that I should write more about, one of these days. One develops a sense of the flow of discourse, the give and take of argument. It’s possible to do things that completely derail that flow of discourse without shouting or swearing. These may not be considered offenses against politeness, as our so-called “civilization” defines that term. But they are offenses against the cooperative exchange of arguments, or even the rules of engagement with the loyal opposition. They are logically rude.
Suppose, for example, that you’re defending X by appealing to Y, and when I seem to be making headway on arguing against Y, you suddenly switch (without having made any concessions) to arguing that it doesn’t matter if ~Y because Z still supports X; and when I seem to be making headway on arguing against Z, you suddenly switch to saying that it doesn’t matter if ~Z because Y still supports X. This is an example from an actual conversation, with X = “It’s okay for me to claim that I’m going to build AGI in five years yet not put any effort into Friendly AI”, Y = “All AIs are automatically ethical”, and Z = “Friendly AI is clearly too hard since SIAI hasn’t solved it yet”.
Even if you never scream or shout, this kind of behavior is rather frustrating for the one who has to talk to you. If we are ever to perform the nigh-impossible task of actually updating on the evidence, we ought to acknowledge when we take a hit; the loyal opposition has earned that much from us, surely, even if we haven’t yet conceded. If the one is reluctant to take a single hit, let them further defend the point. Swapping in a new argument? That’s frustrating. Swapping back and forth? That’s downright logically rude, even if you never raise your voice or interrupt.
The key metaphor is flow. Consider the notion of “semantic stopsigns”, words that halt thought. A stop sign is something that happens within the flow of traffic. Swapping back and forth between arguments might seem merely frustrating, or rude, if you take the arguments at face value—if you stay on the object level. If you jump back a level of abstraction and try to sense the flow of traffic, and imagine what sort of traffic signal this corresponds to… well, you wouldn’t want to run into a traffic signal like that.
Another form of argumentus interruptus is when the other suddenly weakens their claim, without acknowledging the weakening as a concession. Say, you start out by making very strong claims about a God that answers prayers; but when pressed, you retreat back to talking about an impersonal beauty of the universe, without admitting that anything’s changed. If you equivocated back and forth between the two definitions, you would be committing an outright logical fallacy—but even if you don’t do so, sticking out your neck, and then quickly withdrawing it before anyone can chop it off, is frustrating; it lures someone into writing careful refutations which you then dance back from with a smile; it is logically rude. In the traffic metaphor, it’s like offering someone a green light that turns yellow after half a second and leads into a dead end.
So, for example, I’m frustrated if I deal with someone who starts out by making vigorous, contestable, argument-worthy claims implying that the Singularity Institute’s mission is unnecessary, impossible, futile, or misguided, and then tries to dance back by saying, “But I still think that what you’re doing has a 10% chance of being necessary, which is enough to justify funding your project.” Okay, but I’m not arguing with you because I’m worried about my funding getting chopped off, I’m arguing with you because I don’t think that 10% is the right number. You said something that was worth arguing with, and then responded by disengaging when I pressed the point; and if I go on contesting the 10% figure, you are somewhat injured, and repeat that you think that what I’m doing is important. And not only is the 10% number still worth contesting, but you originally seemed to be coming on a bit more strongly than that, before you named a weaker-sounding number… It might not be an outright logical fallacy—not until you equivocate between strong claims and weak defenses in the course of the same argument—but it still feels a little frustrating over on the receiving end.
I try not to do this myself. I can’t say that arguing with me will always be an enjoyable experience, but I at least endeavor not to be logically rude to the loyal opposition. I stick my neck out so that it can be chopped off if I’m wrong, and when I stick my neck out it stays stuck out, and if I have to withdraw it I’ll do so as a visible concession. I may parry—and because I’m human, I may even parry when I shouldn’t—but I at least endeavor not to dodge. Where I plant my standard, I have sent an invitation to capture that banner; and I’ll stand by that invitation. It’s hard enough to count up the balance of arguments without adding fancy dance footwork on top of that.
An awful lot of how people fail at changing their mind seems to have something to do with changing the subject. It might be difficult to point to an outright logical fallacy, but if we have community standards on logical rudeness, we may be able to organize our cognitive traffic a bit less frustratingly.
Added: Checking my notes reminds me to include offering a non-true rejection as a form of logical rudeness. This is where you offer up a reason that isn’t really your most important reason, so that, if it’s defeated, you’ll just switch to something else (which still won’t be your most important reason). This is a distinct form of failure from switching Y->Z->Y, but it’s also frustrating to deal with; not a logical fallacy outright, but a form of logical rudeness. If someone else is going to the trouble to argue with you, then you should offer up your most important reason for rejection first—something that will make a serious dent in your rejection, if cast down—so that they aren’t wasting their time.
Basically it comes down to a measure of the degree to which the other person cares about what you are saying. What Eliezer puts as “sticking his neck out”, I would describe more specifically as “listening carefully to the other person”. In this way I would connect ‘logical rudeness’ with plain old manners.
To put it another way, while the person is talking, are you thinking about what they are saying, or preparing your response? I try to be generous in this way, and most of the people in my life respond well to it. But then I’m choosy about who I spend time with.
It works best with my wife. We’ve been communicating this way for years and years now, and it’s just a wonderful experience to have a conversation in which both people are giving the other exclusive attention.
The other thing my wife and I do really well is give each other space to think. When we’re done talking we stop talking and wait for the other person to have their say. Since she was paying careful attention while I was talking, she might not have something to say right away. So we have to give each other that time. Not many people are comfortable with silence.
In the old days we used to use ice cream as an inverse semaphore; the listener held the pint and the spoon, and ate and listened while the talker talked. Then the talker took the ice cream and had to shut up until the other person asked for it.
This is awesome. I would be tempted to shut up earlier than I usually would just for the reward of getting some ice cream. :)
So the only thing we need to improve online discourse is a way to instantly deliver ice cream over the internet...
Though I appreciate the fun, you are forgetting that this is a solution to a problem that lies in the old-fashioned rudeness of interrupting one another, something quite impossible on a turn-based medium such as this.
On a different note, some people may be distracted too much by the ice cream, and the goal of making them listen might be forgone because of this.
I fear this is something we’ll have to live with. I’ve won many, many arguments by whittling down the opponent’s position until there is nothing substantive left of it. At this point, the only thing I can do that will mess everything up is… to press them on this, and force them to acknowledge their ‘defeat’. Because defeat is how they will perceive it, and will fight back ferociously. You might ‘break’ some of them into a completely new way of thinking, but most likely you will simply undo all your hard work up till then.
Much better to just let them leave with their dignity intact, and with hopefully a better understanding that will percolate through their worldviews and come out a few weeks later in their own words. Think of it as… leaving them a line of retreat.
The trouble is, if people don’t experience the feeling of defeat, they don’t tend to undergo proper relinquishment, and will revert back to the indefensible stronger position in time.
This is only an argument for pressing for defeat where it might actually work.
I’ve found that they will revert a bit—but not as far back as the original position, unless there are social attractors pulling them there (political parties or religion). Over time, their position does shift, if similar themes are argued several times. And once or twice, I’ve seen people earnestly arguing to me the exact position I was trying to convince them of two months before...
This seems to be a lack of capacity to ‘update’ in general.
Even without an argument taking place, I’ve often seen this happen: I explain something to someone, he/she seems to agree and think it makes sense, and then they turn around and do or say something that contradicts their supposedly new belief, reverting back to their old position without even realizing it.
Isn’t that just evidence that they never really got it in the first place? It didn’t look like they ‘clicked’ before, did it?
That’s very possible, and we’d have to look at it on a case-by-case basis to see if they got it but somehow kept that new knowledge/belief compartmentalized, or just pretended to get it when they actually didn’t.
But depending how you define ‘clicking’, it probably includes ‘updating’, so we might be describing something similar using different terminology.
It sounds to me like that is in direct contradiction to Stuart’s statement. (I think he’s right and you’re wrong.) Do you agree that you were simply contradicting him, or were you making some kind of subtle middle ground that I’m not seeing?
I think that both behaviours are sometimes observed, and which is most likely depends on the kind of belief. Some beliefs you can drift away from as he observes, others you have to make a break with.
Thanks for your observation!
I don’t consider it my responsibility to update their beliefs, even if that person is wreaking havoc with their actions. If they are acting irrationally, their power should be stripped from them and the environment updated to account for their stupidity. But I don’t think making them feel defeat is going to make an irrational person suddenly rational.
As Stuart said,
Making them feel it amounts to ridicule and embarrassment. Most of the people I know will act more irrationally in this situation, not less.
I don’t think responsibility is a good way to think about it. There are several reasons to wish those you have opportunity to argue with have more accurate beliefs: one is that it’s likely to serve whatever it is that you believe in and are arguing about, and a second is that they’re more likely to help you reach accurate beliefs in future.
I agree. I am not sure that making them feel defeat is the best way to get to those ends, however. I suppose it will vary from person to person.
Of note, I am very much focusing on the word “feel” throughout this discussion.
This seems to be a variation on my pet peeve of people simply ignoring their opponents’ arguments and walking away (virtually) from a debate. I guess you’re seeing this version more often because of your higher status, and/or because you debate people more in real life, so your opponents can’t afford to just act as if they didn’t hear what you said, or as if your arguments aren’t worth responding to.
I sometimes don’t reply to counters to my arguments because I genuinely think they’re good and I’m not sure what to say next. Empty comments are discouraged here, and it feels like saying “you could be right, I need to think about this more” would be one such.
If the LessWrong codebase were easier to hack on (it’s that PostgreSQL-related bug that stops me from doing so) I’d add a facility so that comments could have a little sidebar that says “ciphergoth and Wei_Dai liked this”.
That it most certainly isn’t! It indicates Progress. There is nothing the least bit empty about it!
Yes, it’s right up there with asking questions about the argument that you are uncertain about.
An aside; how often do you ask people to be quiet for a second so you can think about what they said? How many people are comfortable giving you that space?
It often happens to me that someone sees me stopped and staring into space, thinking as a result of what they say, and concludes that what they said was a really strong argument for their position, when what’s actually happening is that they’ve revealed such a depth of confusion that I’m briefly lost looking for where to start unpicking it.
If social context permits it, I often ask for detailed clarification of what people say, to unearth a suspected deeper disagreement, unexpected close agreement, or possible interesting idea. This may draw attention to a minor detail in a completely unrelated conversation and temporarily shift the discussion to that detail in isolation from the prior conversation. It’s sometimes hard to convince the interlocutor that the context of the prior discussion is really irrelevant to the new discussion, that it’s not acting as an argument for the chosen side of the debate. I would be shocked to find anyone I know in person acting in this manner.
My technique to get time is to say “wait” about ten times, or until they stop and give me time to think. This probably won’t work very well for comment threads, but in real life, not letting the person continue generally works. Probably slightly rude, but more honest and likely less logically rude—a trade-off I can often live with.
If it matters to anyone, I have a policy of upvoting all mind-changes.
I would really REALLY like to see more people editing their top comment in a thread to indicate their mind-change AND what in particular made them update.
I disagree.
EDIT: See below for where I changed my mind.
Well, I upvoted this and your mind change. Given how far you’ve been buried, I’m even less worried about potential abuses than before :-D
With what, exactly?
You’ve convinced me!
You’re missing the step where you edit the earlier comment to reflect that.
If you need some help, reach out to me or wmoore. peerinfinity also got it running locally and submitted some code changes.
Thanks! Any advice on the easiest way to install the right version of PostgreSQL on an Ubuntu Karmic system? That’s what stopped me last time.
Hi ciphergoth, I’ve made some changes this morning that make the Less Wrong code base compatible with PostgreSQL 8.3, which is available in the karmic repositories. The change is on the ‘postgres-8.3’ branch in git. I have run through and revised the Hacking on Less Wrong wiki page this morning on an Ubuntu 9.10 install and have confirmed that it works. It would be great if you could have another go at getting the code up and running and let me know how you go.
Wow, thanks! OK, will try tomorrow evening. Great work!
To get the LW server code running on my Ubuntu box, I just followed the instructions at http://wiki.github.com/tricycle/lesswrong/hacking-on-less-wrong
There were a few details that I needed to ask matt and wmoore for help with, but the instructions on that page were enough to get it basically up and running.
Please feel free to contact me for more information, my Skype ID is PeerInfinity.
I’ve just cleared a backlog of my belated replies. The reasons for not replying immediately:
It was possible that a proper reply would require extra effort, and it wasn’t obvious how to justify not replying at length (uncertainty about the right way of replying, costly wrong decision).
It isn’t clear to me that I’m arguing in good faith, and hence should continue to do so; but it isn’t clear that I’m not, so I can’t justify not arguing either (uncertainty about the right way of replying again).
A proper reply would take a lot of effort (even now I’ve taken a shortcut).
Given that the machinery we use for them is geared more for winning fights than for generating correct beliefs, it’s frankly amazing that arguments manage to change any minds at all.
I wonder how effective strategies for careful discourse without any obvious conflict—where essentially the other party doesn’t recognize that an argument is taking place—might be.
The more private a debate, the more likely people will be generous enough for this to happen; the more public, the more hostile they will be. Hostility is a status-grab, and people in arguments (including this forum) reward it if they think the grabber deserving. Similarly, generosity is low-status, and people who are generous in public debates have very little to gain. Publicly failing in the quantity necessary to maximize your learning growth is very low-status and not many people have the stomach for it.
EDIT: Being low-status also makes it much easier for people to stop responding to your arguments, as “That’s not worthy of a response” is much more believable from the higher-status arguer when the status difference is high.
Wait, what?
When the pecking order is well-defined, we like to see it, but in a neck-and-neck competition, generosity is interpreted as deferral.
Try privately arguing with a holocaust denier or a moon hoaxer. The ones I argued with seemed to be more arrogant and more hostile the more they knew that no third party was observing the “argument.”
This is a great point, but maybe I’m just saying that because it’s the exception that proves the rule. Just by arguing with someone with a fringe viewpoint, you’ve granted them roughly equal status, so they will be highly hostile as a status grab. However, in a group, these fringe viewpoints have a history of rewarding their advocate with exile—so the advocates will make a show of giving away status in that circumstance.
Compare this vs. mainstream, acceptable views—say, conservative vs. liberal in private vs. public. It’s much easier to have a productive conversation in private about these views than in public.
“the exception that proves the rule” seems like a very un-Bayesian thing to say. The implication is that both X and ~X provide evidence for the hypothesis. (Not that I always communicate my actual and complete hypothesis—sometimes that is a distraction from my main point.)
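To make that precise (this is just the law of total probability, nothing specific to this thread):

$$P(H) \;=\; P(H \mid X)\,P(X) \;+\; P(H \mid \lnot X)\,P(\lnot X)$$

P(H) is a weighted average of the two conditional probabilities, so they can’t both exceed it: if observing X raises your probability for H, then observing ~X must lower it.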
I think the implication is not that both X and ~X provide evidence for the hypothesis, but rather something like, “yes, there are a few exceptions to the rule, but if you look at what the exceptions are they’re so unusual that they just underline the fact that the rule is generally (though not universally) applicable.”
That’s an interesting idea, but I’m not sure how to go about acting on it.
Would you just pretend to have the same initial beliefs and then ‘discuss’ counterarguments that you ‘just thought of’?
That method wouldn’t work if the person already knows your position, but it might work quite well if the person isn’t even aware of your position, much less that you hold it.
I’ve done this occasionally, actually.
One time, I found a very opinionated guy with a high opinion of himself—I think he might have been Objectivist, but he scoffed at literally every philosophy he mentioned so it’s hard to tell. I figured that trying to debate him would just end the conversation early; he’s the type to quickly classify those who disagree as sheeple. So, I copied his conversational style a bit, agreed with him on most points but disagreed enough to keep the conversation interesting and get some insight into his views. I don’t think I was directly dishonest about my opinions; I just positioned myself as an ally (in an “Us vs. Them” sense) and worked from there.
I recommend this to anyone who wants to understand the reasoning of, say, creationists, but can’t talk to them without reaching an impasse of rationality vs. dogma.
Giving someone status on a non-content channel will let you get away with murder on the content channel.
Giving someone status now may sometimes be exchanged for them giving up some equal bit of status later.
That’s likely so much more valuable than we all know or care to admit. Would you do us a favor and give SIAI some spare cash when you have the chance? It would drive your point home and be an education to us all :)
A good way to begin an argument is by asking questions about the other person’s position, to get it nailed down.
I love this technique. It’s fun to use on missionaries—I got a couple Mormons a while ago and was able to chatter excitedly about how I’d got this tenet and that article down because of other correspondences, but now they were here, and I’d heard missionaries had special knowledge of these things, and maybe they could clear up one or two points of confusion? It turns out that the best way to get rid of missionaries is to be sincerely sorry to see them go.
This is true; not only is it practical, but it also makes a good rhetorical hammer. For example, I once started an argument with a truther friend by asking him what exactly he believed: “for instance, do you believe all the Jews were evacuated before the planes hit?” Forcing someone defending an irrational belief to first disassociate himself from all the really nutty stuff hanging onto his position works wonders.
I should probably remember to do more Socratic debating in friendly debates with incoming novices—never make a statement yourself if you can ask a question that will get the other person to make it.
I often think this, but find it very difficult in practice. People don’t respond to your questions the way you want; they find a way to hook off them to noodle on what’s on their mind rather than really trying to engage with them.
I agree. Actually, questions are useful at any time. When it seems like the person is changing their position without acknowledging it, I usually ask something like this:
“I’m a little confused. Before, you seemed to be saying X. Now, you seem to be saying Y. Which is it?”
If the person evades my question, that’s the end of the debate as far as I am concerned. I have my own rules of debate, and one rule is that my opponent must answer reasonable questions so that I may understand his or her position.
You’ve got to be careful, though. Some people, e.g. many creationists, will just take that as an invitation to ramble ad infinitum.
Then it isn’t an argument, and you don’t have anything to lose. If someone is being preachy and is completely uninterested in your position, just let them talk. Keep asking questions and a surprising thing happens: everyone else in the room will suddenly realize the person talking is an idiot.
Many years ago, before the web, when email lists and bulletin boards were the cutting edge of the Internet and dinosaurs ruled the earth...
I was on a certain mailing list, and there was one member who, on being pressed to admit having been wrong about something or other, asserted that it was a deliberate policy of his to never do any such thing. That did not mean that he never changed his mind as a result of argument. But (he said) if he did, he would simply cease to assert the view he now thought was mistaken, and after some suitable lapse of time, advocate the view he now thought was correct, as if it had been his view all along.
I am undecided about whether this counts as logical rudeness.
Worded differently, it makes more sense: “I have to process this information and will get back to you with updated beliefs.”
My line of admitting defeat essentially sounds like, “Huh, interesting.”
He would never say anything like that, though. So he would update on new evidence, but he wouldn’t give anyone the satisfaction of crowing over it—which crowing, I would say, is another type of logical rudeness.
People on that mailing list were greatly offended by his declaration, but he simply ignored their protestations.
In some cultures, like that of my mother’s, it is extremely rude to press a person to capitulation. It is expected that people should parry in such a way that neither person loses face. In such contexts, talking in circles, softening the argument and changing the line of the argument—by either party—can be signs that one person has already conceded. It’s not only polite to save the face of the person ‘losing’ the argument, it is polite to spare the ‘winner’ from the embarrassment of causing any loss of face. To the extent that if someone ever abruptly concedes an argument in a face-to-face encounter, I assume that they belong to this culture, and I will rewind the argument to see how I offended them—usually by pressing my argument too hard or too directly.
My father, on the other hand, thought that a touch-down dance must be done on the corpse of every argument, to make sure that it is never resurrected. To not do so would weaken the argument. And I think this is a common American view—that if you are difficult to throw down and hold down, then your opponent’s argument needs to be stronger.
The member of your e-mail list had a third view, which I think is defensible in its contrast to these two extremes.
I think “American” is too general in this context. My home state is Minnesota and the culture there is very passive aggressive. There is a small subset of people who act like your father and are very active aggressive; the majority will bend over backwards to say one thing while meaning another. Meta-communication is huge in this context. If you suddenly switch roles from being passive aggressive to active aggressive the entire community will beat the hell out of you.
“Minnesota nice” is always said with an inside smirk because we know what is happening behind the smile.
I now live in Texas which is a completely different form of “nice.” The people here are more willing to give up a conversation if it will end in someone getting hurt. The behavior of “nice” is expected because they expect people to be nice. Minnesota expects the behavior even though they aren’t actually that nice.
Of course, your mileage may vary.
You’re absolutely right. I only risked this generalization because it seemed to match various American stereotypes enough to help people identify the behavior, without much risk of causing offense because “American” doesn’t actually mean anything. Narrower labels are more misleading, which is why I won’t share here the cultural group of my mother.
Interesting. Can you say more about how your mother’s culture’s way of handling conflict affects its members’ rationality, in comparison to your father’s?
Not really. Just going by the model, I would predict that if my dad was irrational, it would be because of a refusal to update beliefs, and if my mom was irrational, it would be because of not clearly defining her position.
However, my dad likes arguing and changing his mind, and I can’t infer from my mother’s equanimity that her own beliefs aren’t specific.
I can predict that if I asked them, they’d agree that updating beliefs is a private matter, independent of the social details (?) of an argument.
On the way home today (driving = meditation), I realized that if I wasn’t making any headway comparing and contrasting my parents’ rationality—all I came up with were paradoxes and conundrums, enough for a small novel—it was because they are both exceptionally rational. However, their extended families are caricatures of irrationality.
My mother’s family succumb to magical thinking—oh wow, they do. My grandmother is afraid of bridges AND cars, and whenever she drives over a bridge (being driven, she can’t drive) she says a prayer so that everyone’s souls will stay in the car and not go under the bridge. My father’s family are Republicans and never notice how conveniently all facts about the world fall straight down party lines. (“Well, of course, one side can THINK and the others are morons.”)
Rationality-wise, from this single case study of my families, I’d say one family being argumentative and competitive about beliefs led to good [instrumental] rationality and closed-mindedness, and one family being confrontation-avoidant led to poor [instrumental] rationality and open-mindedness. I would never claim such a thing in general and would be curious about other data points.
Real open-mindedness or just verbal pleasantry? Can you give a concrete example of them acting on a new idea they were open to?
… thinking about it further, I’ve decided I don’t know them well enough. I haven’t spent that much time with them.
I have an increasingly uneasy feeling about the possible value of reducing 30 people to 4 hand waving generalizations. I don’t understand why I can’t anticipate the anxiety until after I post the comment.
The crowing seems unhealthy, for both parties.
Also, I find that the correlation between people admitting defeat in an argument and actually changing minds is so low that having everyone behave like this person might not destroy any information about what arguments work. But it seems useful to do a post-mortem at a later, calmer date. When he comes back with a new belief, it would be useful to know (1) that he changed it and (2) why.
Really? This very day, someone said FAIL to me, I admitted it, and I’m still alive and healthy. I’ve already lived through an arguer’s worst nightmare, and it doesn’t seem to have harmed me much.
Other people are different.
Speaking for myself, I find it very unpleasant to be on the receiving end of crowing. Hence I have a much easier time admitting mistakes to people I particularly trust not to crow. (One of the nice things about LW is that there isn’t much crowing here, which makes mind-changing and fessing up easier. I’d definitely like to keep it that way.)
There’s a really easy trick for conceding a debate without being crowed at. The trick is to admit that you were wrong, concisely explain the change you’ve made to your beliefs, and warmly thank the other person for taking the time to help you become correct about this. Even if they were a bit of a dick in the debate itself. Don’t declare defeat; declare a mutual victory of truth.
Anybody who can crow about your defeat after that is a huge asshole, and furthermore this should be obvious to anybody watching.
(This trick also makes you feel better about changing your mind, because you’ve reclassified it as a victory. I’ve had a much easier time conceding debates ever since I adopted this habit and mindset.)
I wasn’t born different, AFAIK. I’ve been training myself.
Though you know, even in the absence of training, I think a considerable majority of people wouldn’t actually drop dead.
Although according to Robin Hanson, low status significantly reduces life expectancy, so presumably every time someone crows, your life expectancy goes down by some amount.
On the other hand, maybe your IQ goes up a few points.
http://www.overcomingbias.com/2007/12/heroic-job-bias.html
does not sound like a healthy response to me. (I’m not disputing the truth of the claim.)
This does seem like a healthy response to me.
Neener neener neener, Eliezer was wroooong!
(the wroooong has to stretch across two syllables to fit the meter)
Now all we need is a fanperson to compile every single time Eliezer has been wrong on the internet. :D
I think this is a common policy, although it’s rare that someone would admit to following it.
Apologies if this is injecting too much mind-killing, but I really started taking notice of this type of argument-gymnastics last year around the “Cash for Clunkers” program.
“This program is great! It will get money to the struggling auto-makers.”
“Wouldn’t it be more efficient to just give them money like we did before? And what if it just goes to the strong auto-makers?”
“Well, maybe. But think about the environmental benefit of all those old cars off the road!”
“Wouldn’t it be more efficient to just spend the money on the environment directly? And isn’t manufacturing a bunch of new cars bad for the environment?”
“Well yeah, but it will get money to the struggling auto-makers!”
I don’t know anything about the Clunkers program, but this doesn’t sound completely irrational to me. If X does both A and B at 60% efficiency and Y will do either A or B at 100% efficiency, which is better? (These numbers are just examples.)
This behavior seems different than the example from the OP which seems to be more like:
X is true because of A!
A is impossible
X is true because of B!
B is impossible
X is true because of A!
*facepalm*
Fair point. This is more like “Program X does A, but really inefficiently.” “True, but it also does B!” “OK, but it also does B really inefficiently.” “True, but it also does A!”
See also Religion’s rules of inference.
I’m not saying anything about the actual program or results, but that form of argument might be valid in this case. In a ‘kill two birds with one stone’ kind of way.
Looking at the efficiency of any one thing may not be the best strategy if you care about lots of things.
Sure, but (again in ignorance of the actual program) there should be at least one point on which you’re prepared to defend its efficacy.
Really, we should be trying to look at the total effects of any given expenditure. (including where we get the money from in the first place, if that’s variable)
But to simplify:
If spending $100 in a certain way benefits 10 parties as much as giving them $20 would, each could argue that it would be more efficient (by a factor of 5!!) to just give them the $100. But if you care roughly equally for all the parties, it would really be only half as good.
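To make the arithmetic explicit—a minimal sketch using just the made-up numbers above, where equal concern for every party is the assumption doing the work:

```python
# Hypothetical numbers from the example above: a $100 program that gives
# each of 10 parties as much benefit as a $20 direct gift would, versus
# handing the whole $100 to a single party.

parties = 10
benefit_per_party = 20            # program benefit per party, in direct-gift dollars

program_total = parties * benefit_per_party   # 10 * 20 = 200
direct_gift_total = 100                       # all $100 goes to one party

# From any one party's viewpoint the direct gift looks 100 / 20 = 5x better,
# but summed with equal weight on all parties, the program wins 200 to 100:
print(program_total, direct_gift_total)       # -> 200 100
```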
The only such defence worthy of our attention is one where the speaker is prepared to explicitly state a guess at the dollar value of each advantage and show that the sum is greater than the cost spent.
Or where the sum advantage is obvious compared to the next best alternative, without formally computing expected value.
Agreed
Yeah, exactly. The problem with the outline above is that in step 2 they acknowledge that the program doesn’t do A well. But then in step 4 they act like they never conceded that.
IIRC, John Searle uses a subtle form of this in his rebuttal to rebuttals to his Chinese Room argument. He separates the attacks into different cases; then he uses one set of assumptions and definitions to rebut one case; then switches (without pointing it out) to a different set of assumptions and definitions to rebut another case. Neither set of assumptions and definitions is sufficient to rebut both cases.
This can be valid if the assumptions are brought in by the rebuttals he’s defending against, and those rebuttals make contradictory assumptions.
Do the sets of assumptions and definitions contradict each other, or can they all be seen as subsets of a single set of definitions and assumptions? If they contradict each other then pointing that out should be an effective argument in most of the contexts where Searle writes.
I’m referring to Searle’s responses to the responses to his article that were originally published in Behavioral and Brain Sciences 3, 1980. I studied them 20 years ago, but it would take me a long time to go back and re-analyze them to answer your question.
I wondered what you meant by different sets. I think that this answers my question (they might have been consistent). Thanks.
I was in a discussion with a man at a yoga studio after he overheard me telling someone that Bikram’s Yoga was “kind of cultish”.
He proceeded to make one illogical argument after another, smiling and laughing confidently as though in triumph after each one. When I shot down what he said or explained how he had misunderstood a comparison, he would not acknowledge or think about what I said, but simply move on to the next “winning comment” that he wanted to make. He thought he was superior for having made his points, even though my replies reduced them to nothing.
I think this qualifies as logical rudeness, though I didn’t have a word for it at the time. I really don’t understand people like this. I think it’s kind of a contest for him, to just try and look confident while making the barest amount of sense to make a passable argument. I think it’s the jock equivalent of scoring a goal or beating someone up, as opposed to a discussion that actually is supposed to improve the total knowledge of both parties.
This happens so often that it’s probably best to call people on it. Politely, of course. Ask them ever so sweetly if they agree with your refutation of what they just said, and do not let them continue on to the next argument until they at least acknowledge that you’ve said something.
This way you can stop bad reasoning and make them feel bad for it, instead of getting pissed at you. Hopefully.
Unfortunately, it is usually rude not to allow people their logical rudeness, and from what I can see, it is expected that talent with this footwork, as you describe it, will be deferred to with respect.
For my part I tend to cut off engagement with those who persist with such logical rudeness but that option would be less appealing if the subject was something that really mattered to me. For example my life’s mission and the very future of the universe. Sometimes bullshit just needs to be called, even if it breaks the flow and the illusion of good faith.
This is exactly what Richard Dawkins and Sam Harris are trying to promote:
That people’s logical rudeness should not be excused because it is based upon something Sacred. It is these Sacred Beliefs that perpetuate the Logical Rudeness that Eliezer has defined in his post, and these beliefs eventually need to be forcefully examined and probably opposed!
Absolutely, and I’m concurring.
I like your reply because to me it comically resembles the Sin of underconfidence. You are being cautious about what you say about Sacred Beliefs, so you say they eventually need to be forcefully examined and probably opposed.
Well I think we need to just call a spade a spade. Religious beliefs are obviously whacky, and they should be opposed now. There is no way all of society is going to have a civilized debate about religion, so we just need to start forcefully objecting to sacred cows and any protection that people’s unfounded beliefs have in “polite conversation”.
I said eventually, because not every belief is going to come up at once, and I said probably because not every belief in the set of religious beliefs is toxic or wrong.
The belief may be true in some sense, but its underlying reasoning will need to be adjusted.
You are correct though that the basis of most (almost all) religious belief is outright crazy and needs to be opposed as soon as it is encountered.
As an example of a belief I am describing:
The only thing wrong with this belief is that the set of things in the second category is empty, and the set of things in the first category needs to be adjusted based upon that second category being the empty set.
Thus, this is an accurate belief. Vacuous for the most part, but accurate. It should be amended to just
(Edit: What Eliezer is describing in his post on the Sin of Underconfidence is a phenomenon called the Dunning-Kruger effect.)
Is it really perceived as calling bullshit if you just say, “Wait, before moving on to your second point, let’s finish up debating this first point”?
If only that worked reliably!
Sometimes it’s like shouting at the tides; a determined opponent will just make a more forceful push for derailment next time.
The Logical Rudeness (and a little bit of Plain Rudeness—generally a somewhat angry and mocking tone) was strong in someone I was recently debating about the desirability of indefinitely long lifespans.
They would make an argument; I would offer a counterargument. This might go back and forth a few times, but in the end they would usually switch to another argument without acknowledging my last counterargument at all. And then, later, they’d often switch back to the same point they had made before and refuse to acknowledge my counterargument to it, as though I had never said it. Very frustrating.
(This is why I’m not interested in going into politics anymore. This is the structure of pretty much every political debate, and I have a very low tolerance for it.)
I think I’m going to start asking people to accept this precondition if they want to argue with me: When one of us makes a point, the other must offer a counterargument or explicitly concede the point. We’re not allowed to move to another point without doing that first. Concede or refute, don’t ignore. And if one of us later reuses an argument we previously conceded, the other person gets to dismiss it without repeating their refutation.
I was thinking of adding “withdraw” as an option (Abort/retry/fail? Concede/refute/withdraw?), which would be like pleading no contest in a trial: it would say “I don’t necessarily accept your argument, but I won’t contest it for now”. You’d be stating your intention to act as though you had conceded it, with the caveat that you still don’t believe it’s correct. I can see some advantages of this — it might be appropriate in cases where a point is relatively minor to the subject of the debate, when it’s not worth getting into something too deeply if there isn’t already agreement — but on the other hand, we probably shouldn’t have a norm that allows people to get out of changing their minds too easily. Any thoughts on this? Perhaps the standard should just be that if you don’t expect you’ll care to continue supporting a given argument after you’ve made it and heard possible counterarguments, you shouldn’t use it in the first place.
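To make the proposed rule concrete, here’s a toy sketch of the bookkeeping it implies (the names and structure are my own illustration, not any real system):

```python
# Toy tracker for the proposed precondition: every point must be conceded,
# refuted, or withdrawn before a new one is raised, and a previously
# conceded point gets dismissed without the refutation being repeated.
# (Illustrative only; all names here are invented.)

OPEN, CONCEDED, REFUTED, WITHDRAWN = "open", "conceded", "refuted", "withdrawn"

class Debate:
    def __init__(self):
        self.points = {}  # argument text -> current status

    def raise_point(self, point):
        if OPEN in self.points.values():
            raise RuntimeError("concede, refute, or withdraw the open point first")
        if self.points.get(point) == CONCEDED:
            raise RuntimeError("previously conceded; dismissed without a rehearing")
        self.points[point] = OPEN  # withdrawn points may be contested again later

    def resolve(self, point, status):
        if status not in (CONCEDED, REFUTED, WITHDRAWN):
            raise ValueError("the only moves are concede, refute, or withdraw")
        self.points[point] = status
```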
Tapping out
It sounds reasonable to me...
...but a problem has just occurred to me: what if one debater is convincingly correct, but the other persists in invalid refutations? The third option might be less “nolo contendere” than “I rest my case”.
To be clear, I meant “withdraw” as “I withdraw this particular argument”, not “I withdraw from the debate”. It sounds like you’re talking more about the latter. But that might be more useful anyway, now that I think about it.
Of course, in situations like that (and in debates in general), it might be helpful to have some other people observing it so there can be an outside reference for what’s “convincing”.
This might work as an explicit standard for argument here.
No. This is still a blog, not a vocation. If I fail to respond to your blog comment, that means that I didn’t happen to read that comment. It does not tell you anything about whether or not you were right. So it is not a valid argument, much less a trump card in all future discussions.
This isn’t a rule about being required to reply. It’s a rule about not offering new arguments until old arguments have been accepted or refuted.
I only meant the rule to apply to interactions—A offers argument A1, B (who’s discussing the matter with A) must address A1 before moving on to B1. C (who hasn’t said anything so far) is under no obligation.
If B didn’t see A1 or doesn’t remember it, then B should be politely reminded of it. If B then persists in offering B1, then the rule gets invoked.
I accept your precondition for engagement.
Perhaps we can start by debating the desirability of indefinitely long lifespans. If you want to broadly summarize your friend’s position on the Open Thread, I’ll take that position.
(I need practice arguing, because I often have conviction without identifiable or linearly structured reasons.)
Don’t practice arguing. Practice thinking. When you believe the right things for the right reasons, good arguments will follow. I explicitly try not to be too good at arguing (with myself or with others), so that I can’t persuade people of things without having done the hard work of figuring out what I rationally must believe and how strongly I must believe it.
(Edit: In case it might have come across the wrong way… I didn’t mean those first three sentences to sound condescending at all, as though I was saying you in particular need to practice those; just stating my view on what we should want to be good at.)
I would not have argued in a rhetorical way, but would have tried to present the most thoughtful arguments that I could identify. (In other words, it would be a way of thinking about the issue.)
The exercise is to relate to the position and then experience (from a relatively detached viewpoint) how the position is taxed or not taxed by the counterarguments. I think this would be an interesting exercise to see if your arguments would work with someone who (in theory) holds your friend’s position but is willing to absorb the counter-arguments.
Thanks for your advice, but it doesn’t resonate so much for me. I’m not fearful of persuading people of things, but I need to learn how to persuade myself. In particular, what you wrote seems backwards. Believing the right things for the right reasons follows from developing and recognizing good arguments, and I need more skill in developing arguments.
Are you offering to argue for a position you do not sincerely hold? Because there are lots of people in the world who disagree with you about something if all you want is to practice arguing.
I suppose I was making some small gamble that I would be able to relate to the friend’s position. Although this could bear some objective testing, I think that I am pretty flexible about what I can be sincere about. Also, I have some undeveloped ideas about this issue and I thought it would be interesting to develop them, in one direction or the other.
I agree with you about the norm for this community, but I’m surprised you didn’t include Suber’s class of examples: ‘refutations’ by analyzing criticism as behavior instead of argument. (PaulWright pointed this out.) It seems like a clear and familiar set of examples.
On another subject, I’m thinking that there’s another occasional source of logical rudeness: arguments ‘to make people think’. This generally takes the form of dancing back, simply because a proponent suggesting arguments they do not believe for a position that they do not hold feels no compunction when violating their burdens of going forward.
Anyone who says “I was only trying to make you think” is not worth another second of your attention.
It’s trolling, for sure.
As far as I can recall I have only heard this said by trolls, and it usually triggers a ban.
In real life I’ve heard it said by ignorant blowhards. Trolls understand that they are speaking insincerely in order to cause trouble; blowhards are just bullshitting and don’t care whether what they say is true or not.
“Playing devil’s advocate” is the non-insidious way to describe this tactic when it’s done in good faith, presumably with the other party being aware that one is not defending one’s true position or attacking positions one necessarily thinks are false.
“I was just trying to make you think” definitely sets off alarms of back-pedaling and status dismissal. But I’ve just as often heard “playing devil’s advocate” used for the same back-pedaling purpose. Though “playing devil’s advocate” seems to be used in bad faith more often than “just trying to make you think” when someone doesn’t care whether what they say is true or not.

Sometimes it can be disguised as trying to help someone by training their ability to refute arguments. This can lead to the Advocate trying to “win” the argument with false beliefs, and then updating the other party toward falsity as a sort of dominance. Or they can bail out with their status intact if it looks like they can’t win. Kind of like claiming that what has become a fight was actually only sparring, once it’s gotten too intense—but only because they intentionally didn’t commit to either fighting or sparring, and were instead gauging their ability to win, without realizing it. In this way they can be sure to never lose an argument. I did this sometimes when I was younger, before I knew better. Coming out of the argument with my status intact was a higher priority than coming closer to the truth.
Trolls who would benefit by claiming to be “playing devil’s advocate” instead of “just trying to make you think”, will. Making a good-faith effort, and intentionality, are everything for this.
Devil’s advocacy is declared in advance.
And I for one am against it.
I’m not sure this is fair. It can be useful to ask someone supporting a theory how that theory would respond to a particular objection, even when you don’t agree with the objection. Hearing how someone responds can give you more information about the theory and the person’s beliefs, and they may have a response that you hadn’t thought of.
That is a legitimate exploration of the nature of a belief. What I am referring to is—for example—an atheist deciding to argue that God exists to all comers.
The underlying issue is what we take the purpose of debate or discussion to be. Here we consider discourse to be prior to justified belief; the intent is to reveal the reasonable views to hold, and then update our beliefs.
If there is a desire to justify some specific belief as an end in itself, then the rules of logical politeness are null; they have no meaning to you as you’re not looking to find truth, per se, but to defend an existing position. You have to admit that you could in principle be wrong, and that is a step that, by observation, most people do not make on most issues.
This is clearly exacerbated on issues where beliefs have been pre-formed by the point at which people learn the trappings of logical argument; heuristic internal beliefs are the most likely to be defended in this fashion. The only community standard that seems to be required is that people are willing, or preferably eager, to update if the evidence or balance of argument (given limited cognition) changes.
Or that people endorse that idea in the abstract enough to adopt standards about logical rudeness which can be recognized and applied in the heat of argument. People are less likely to evade that way if they know everyone else will say “you lost”.
I see this sort of thing often. Often rather than simply swapping back and forth, someone will go through a long chain of switches, going from arguing for A, to arguing for B, then for C, then for D, and then finally going back to A, and forgetting that we ever discussed it in the first place. In fact if you analyze the torture / dust speck discussion you will see that many people did this very thing.
Bonus points for, instead of forming a circle, leading the chain off topic and forgetting whatever we were discussing in the first place.
Could you please describe in general terms what notes for a post like this look like?
I used to do this quite often. Usually in personal conversations rather than online, because I would get caught up in trying to win. I didn’t really notice I was doing it until I heard someone grumbling about such behavior and realized I was among the guilty. Now I try to catch myself before retreating, and make sure to acknowledge the point.
So not much to add, other than the encouraging observation that people can occasionally improve their behavior by reading this sort of stuff.
Added to post: Offering a non-true rejection is also logically rude.
Does intent matter? It would seem that there are some cases where offering a non-true rejection is more analogous to being confused than being rude.
I agree, obviously, that often offering a non-true rejection is rudeness. But ‘true rejections’ are sometimes hard to really nail down.
Intent matters a great deal on this one.
which I’m pretty sure I first found here, HT
Glad you liked it.
Suber seems to concentrate on tactics where one person avoids responding to the argument by making some statement about the arguer (“you’re saying that because of your hopeless confirmation bias!”). That sort of rudeness is a potential problem if someone holds a belief which includes explanations of why other people don’t believe it. I’m not sure what to do about that, since I certainly have such beliefs. As far as I can make out, if I want to avoid being rude, I end up having to respond to arguments against my belief even though I think those arguments aren’t the reason the arguer doesn’t share my belief.
Your example of people who concede Y but then switch to Z reminds me of When Theism is Like an M.C. Escher Drawing.
[edit: removed spurious “aren’t”]
The rudeness that Eliezer is talking about seems different from the rudeness that Suber is talking about.
What Eliezer is talking about might be grouped under the heading “Failure to keep score.” The interlocutor refuses to acknowledge that a point has been undermined. Maybe the interlocutor pretends that the point was never made. Or maybe the interlocutor returns to the point as though it had never been undermined.
What Suber is talking about is the kind of rudeness where you refuse to play the game altogether. In reply to arguments, you don’t even pretend to address them. Instead, you say, “According to my position, I don’t even need to address your arguments on their merits.”
I’m generalizing Suber’s lovely term in a way that seems appropriate; if he actually objects to this, I’ll find a different term. But it seems that once you go so far as to coin a lovely term like “logically rude”, then doing nothing but questioning the other person’s motives is just a specialized kind of logical rudeness.
Just to be clear, I myself wasn’t objecting.
The Parable of the Pawnbroker discusses a form of logical rudeness. I’m afraid I no longer remember which Less Wrong person directed me to that article!
I posted that link (the full text is on the talkrational forums) some time ago in reply to a thread here—I would have to look up when.
When people sense the weakness of their position, they often dance the argument over to a different question so they can “win”.
I often want to say “hey, can you give me the win on this one?”
This post about sums up my history debating intellectual property on the Mises blog.
NSK: [New post] Hahaha! Look at this recent event! This CLEARLY shows how horribly destructive intellectual property is, by its very nature!
me: But, look at all this evidence and theory about how it still makes us all better off, on net.
NSK: pfft! Only utilitarians care about that kind of thing!
The argumentative technique you cite—arguing X on the basis of Y, Y is defeated, switch to arguing X based on Z, Z is defeated, switch back to Y—looks like an example of what I call position dancing.
Would you agree that there seems to be a large overlap between “logical rudeness” and rhetorical deception?
If “rhetorical deception” means “Dark Arts”, then yes.
Oooh yes, I knew there was something back in the LW wiki about this and I just couldn’t think of it. I’ve added a link to the “Dark arts” wiki page in the main Rhetorical deception article (under “Reference”).
Isn’t this just simply equivocating?
(Edit) Upon further reading of the link you provided: this is pretty typical behavior from the masses. It is a form of irrational dancing, where they tend to be so opaque in their responses that they make no sense.
In the post you (Eliezer) made about the “click” (people just “getting” it), the conversation you recalled, in which a woman was discussing “magic”, was the sort that usually involves this kind of logical rudeness (except that in the case you recounted, the woman was rational enough to see your point).
Usually, the conversation would decay into a dance involving “what is real, anyway”, or “That doesn’t matter, because I still have these experiences,” which is followed by an explanation of why those experiences don’t show at all what the person asserting their truth thinks, only to have them say “Well, then, that doesn’t matter because I just know” (Or have faith, etc.)
The weakening of claims is just another part of this dance, where they will just carry on as if nothing has happened (if anyone has ever read any of the threads in the Faith & Religion section of Richard Dawkins’ or Sam Harris’ web site forums, these sorts of behavior are very well known).
In my attempts on those forums to promote more rational thought, I have found it maddeningly difficult to get the basic concepts of logic and critical thought across. They always assume that they know logic, even when the phrases modus tollens and modus ponens mean nothing to them. These people don’t even understand the terms “proposition” or “axiom” as logical terms, nor how to recognize or construct one of these logical structures.
So, rather than try to counter their claims, I usually will try to just teach them how to make a logical argument. It is at least the first step, even if they are really still nursing on the baby-bottle of logic and rationality.
That’s the kind of problem I witness regularly when I argue with someone, and it is indeed very frustrating (and I have to admit that in the “heat” of debate I sometimes commit it too, not the Y->Z->Y but the Y->Z without conceding a local defeat, and I usually only realize it afterwards, and then I feel bad… but I’m working on it).
Like the other very valid points you posted in other articles (taboo, semantic stopsigns, …), it’s very interesting to know them; they help a lot in understanding “what went wrong” in a debate that didn’t reach agreement (or at least didn’t manage to make either side change their mind in the slightest). But it’s very hard to use them to improve the quality of the debate itself.
I tried playing “taboo”, but without the full explanation of rationalist taboo it’s very easy to unconsciously switch back to a synonym, to cheat instead of being forced to reduce. I fear the same with the other points… and saying to someone in the middle of a debate “well, go read the Sequences on LW and come back when you are done” is also a form of logical rudeness.
Does anyone know methods or ways to use the knowledge acquired on LW to actually improve the quality of a debate? (By that I don’t mean “to win the argument” but “to increase the chance that at least one of the two parties changes their mind a bit”.) I fear it’s a problem of inferential distance, and that there is no real solution to it…
It’s orthogonal to most of the Sequences, but I have found good results from “meta-teaming up” with my debate partner. That is, after a short debate, long enough to expose both our views, I stop the other person and ask them to go meta and help me pick apart my argument. This usually earns enough goodwill to overcome “this guy’s my opponent” long enough for us to then team up and pick apart their argument. Where you focus this picking-apart determines how the debate will go: if you focus on your areas of disagreement, you get mind-changing; if you focus on prior assumptions, you get a better picture of the person’s mental model; and so on.
I learned this from code duels: when two people disagree over the best way to write some function or program, they go off and both do it, then come back and compare notes. I was impressed by how they don’t simply run their code and point at faster response times, but actually work through the logic and see where their two solutions differ. This seemed obviously applicable to arguments, and it’s worked for me so far.
Caveat: I’ve only tried it a few times, there may be some “fails badly” moment I’m not aware of yet.
While discussing hypothetical situations or speculating towards the reasoning behind some rule or behavior, my fiancée will sometimes put forward an idea with an obvious flaw in it. I will point out this flaw with the expectation of her correcting it to improve the idea, but instead she will respond with “Well, what do you think, then?” Often I don’t have a hypothesis of my own, and will admit it, but she seems to think that it’s unacceptable of me to criticize an argument if I don’t have a better one to replace it. In general, am I being logically rude by pointing out that an idea is illogical if I don’t have a replacement to offer, or is it acceptable to say, “I don’t know, but I know THAT’s wrong”?
Wow. This generated a lot of comments really quickly. Well, it’s an intriguing post and I like the kind of self-description of your attitude towards arguments. Anyone who has spent a lot of time arguing understands that the mechanisms of discursive exchange are crucial for achieving a valuable discussion. Sadly, it’s very hard to deal with people who don’t understand this. I’ve had friends who just hated to disagree about anything.
Interestingly, the sort of thing that irritates you, such as making concessions, is very useful for getting somebody to argue or discuss something with you when they are reluctant to engage. So I think there is a place for a bit of honorable evasiveness in the argumentative domain, if only because it can make your opponent more pliable and open to disagreement.
On the other hand, I strongly agree with you that opinions and platforms should be put forward forcefully and concretely. Your post is really poignant and excellent on that subject. At times it may feel like you are being a little overconfident, but beliefs should always be stated forcefully and directly because, first things first, you’re trying to convey an assessment. You never get anywhere without a theory, even if that theory must eventually be broken.
I cannot parse this sentence.
I parse it as “Many people avoid changing their minds by changing the subject as a way of avoiding thinking about the point that should cause them to do so.”
Just… a lot of failure involves changing the subject at the right (wrong) time. It’s most noticeable as a repeated rhetorical event, but I’m sure it happens in silent thought all the time.
This is a common tactic and it has more to do with people defending their beliefs than with any attempt to get to the truth of a matter. But are you doing any different if you keep engaging these people and complaining about their tactics? If getting to the truth is important to you, then arguing with people who do this isn’t going to help; maybe you’re actually just defending your beliefs too, rather than arguing with someone who will help you toward the truth.
You can’t resolve only to argue with neo-rationalists and abandon the 99.99999% of the population who are either not rationalists or are Traditional Rationalists; this will effectively protect you from most counter-arguments and is a move towards the cult attractor. I think that erring on the side of arguing often will help you get to the truth.
And you definitely shouldn’t resolve to argue with no one at all. Even those who claim to be rationalists often defend themselves in this way.
Upvoted, but it’s not always so clear as that in practice; some people do respond well to being redirected, and it can be rewarding to have discussions with them even if it is a bit more work.
As I understand it, logical rudeness is entirely a defensive system. This may be because it is being used to preserve status. As such, a satire-style argument should be immune to it. If you can advocate support for a more extreme position, or unpalatable consequences of their own position, then they can cheerfully gain status while being the aggressor, and they might not mind so much if it turns out they’re fighting their own position. Of course, it can’t be too obvious what you’re doing.
It helps if you can point to a specific person who actually holds that position. You can also get them to distance themselves from the position they’re attacking, e.g. by hinting that they might agree with the low-status “crazies”. Also, you can more easily point out that you don’t actually agree with that guy either.
Of course, it’s a little dark artsy to pretend you’re holding a position you’re not.
Sorry, I am not sure you were really asking for or needed my input, but here it is.
If you intend to stick out your neck, then maybe you have given me permission to suggest that negotiating a meaningful reality/connection acceptable to oneself and the other could be just as daring, and perhaps of similar ultimate utility, as argument and debate. I don’t really know if this works, but it seems to me that the people I like to “serve” understand that “I’ll let you be in my dreams if I can be in yours.”
Negotiation in this context is from the book “Getting to Yes”
I also liked Dr. Xavier Amador’s work in the field of anosognosia, where he uses a LEAP method of working with people: Listen, Empathize, Agree, and form a Partnership. The hard part is waiting until the other person requests your guidance.
Watch this video of Richard Dawkins debating a creationist and take a drink every time she says “So what I would go back to...”
http://www.youtube.com/watch?v=US8f1w1cYvs
When Dawkins starts trying to psychoanalyze his opponent, he really starts to look like the one being logically rude, and at that point he has lost the high ground in the argument. He might be right in his diagnosis of her “emotional agenda”, but since he asked where she studied science, shouldn’t she be equally entitled to ask him where he studied clinical psychology?
This video is a good example of logical rudeness, but not only from one of the participants.
It seems to me that Dawkins is the first to shift the “argument”, when he asks “Where did you study science?”, and again when he brings up the “emotional agenda”.
This isn’t to defend the creationist’s blabbering, just saying—sauce for the goose is sauce for the gander.
Logical rudeness happens all the time. It happened here on LW, when Eliezer Yudkowsky said:
And I commented on it.
He never retracted his claim, nor answered it.
http://lesswrong.com/lw/vp/worse_than_random/
Yeah, also that other time when someone said something I didn’t like, that was a perfect example of just the sort of irrationality we’re talking about! And on a site called “Less Wrong”, well, it’s ironic!
Okay, so how can you do better than random at guessing which uranium atom will blow up next?
This is I think orthogonal to the point I’m trying to make. To say it again with a straight face (which I should probably have done the first time, as per my comments in the niceness thread about bad arguments being harder to state with a straight face):
Surely the proper reading of this post is not that everyone is obliged to reply to every comment that criticises their position. On a site like this, one must make a decision on which criticisms to reply to.
More generally, this is the tu quoque fallacy. Whether or not Eliezer is a paragon of perfect rationality and a flawless exemplar of his own advice is a different question to whether the advice is good.
Of course the advice is good. In that particular case, Eliezer was exaggerating; his intention was to say that randomness wasn’t necessary in order to achieve the best result, not that it is always possible to achieve a result better than what randomness achieves. Why he didn’t respond, I don’t know. But it is certainly true that the very logical rudeness that Eliezer is talking about is common on Less Wrong as well as elsewhere, and I do find it annoying that practically everyone is talking about it as something that only other people do.
I suppose that must have happened sometime, but next time you find yourself postulating this as part of an explanation, please stop, notice, and feel a little confused.
Actually, that goes for everyone in this thread deconstructing my supposed mistake, based on (a) a misquotation and (b) not realizing that every algorithm which can be “improved by randomizing” can in fact be improved further by derandomizing (except in cases where an intelligent opponent can predict a given set of bits if they are produced by a “deterministic” process, but not predict the same bits if they are produced by a “random” process). I sometimes make mistakes, but I also sometimes don’t, and if you can hold both possibilities in mind without it destroying your internal critic, it will probably help in the long run.
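A standard way to see why derandomizing works (this is my gloss, not necessarily the exact argument intended above): a randomized algorithm is just a probability mixture over deterministic algorithms, so its expected payoff is an average of theirs, and the best deterministic component must do at least as well as that average. A toy sketch in Python, with the payoff numbers invented purely for illustration:

```python
import random

# Hypothetical expected payoffs of three deterministic strategies.
payoffs = {"A": 0.4, "B": 0.9, "C": 0.6}

def randomized_strategy():
    # A randomized algorithm: pick a deterministic strategy at random.
    return random.choice(list(payoffs))

# The mixture's expected payoff is the average of its components...
mixture_value = sum(payoffs.values()) / len(payoffs)  # about 0.633
# ...so the best single component always does at least as well.
derandomized_value = max(payoffs.values())            # 0.9
assert derandomized_value >= mixture_value
```

(The exception noted above, an opponent who can predict deterministic bits but not random ones, is exactly the case where the payoff of each deterministic component is not held fixed, so the averaging argument no longer applies.)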
Ok, but there are some random algorithms which cannot be improved by derandomizing, precisely because the random algorithm does just as well as any deterministic algorithm: for example, if there is some event that has an exact 50% chance of happening, all algorithms, random or not, do equally well at guessing whether it will happen or not.
In other words, such a case doesn’t satisfy the condition that the algorithm can be “improved by randomizing.”
I take it that such an algorithm couldn’t be improved in accuracy, but I expect any randomized algorithm would be more cycle-intensive than a constant rule of “guess that event X will happen”—which will perform just as well.
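To make that concrete, here is a minimal sketch (Python, with invented names) of the 50% case: both rules have the same expected accuracy, but only the randomized one pays for random bits it cannot use.

```python
import random

def constant_guesser(_trial):
    # Always guess "heads": deterministic, consumes no random bits.
    return True

def randomized_guesser(_trial):
    # Guess "heads" half the time: same expected accuracy, extra RNG work.
    return random.random() < 0.5

def accuracy(guesser, trials=100_000):
    hits = 0
    for t in range(trials):
        outcome = random.random() < 0.5  # a fair 50% event
        hits += (guesser(t) == outcome)
    return hits / trials

print(accuracy(constant_guesser))    # about 0.5
print(accuracy(randomized_guesser))  # about 0.5, at greater cost
```

Both hover around 0.5; the only difference is the cycles the randomized rule burns generating bits that cannot help it.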
Eliezer:
Unknowns:
I think a claim that a randomized algorithm is never better than some deterministic one is interesting and probably true (possibly not, which is why it’s interesting). Is Eliezer really making an even stronger claim than this? Is any precise claim being made? The meaning of “improving an algorithm by adding (or removing) randomization” is pretty vague. Improving in what way? Isn’t any non-randomized algorithm technically a member of “randomized algorithms”? If not, can’t it be made technically randomized at no asymptotic cost?
Good points. I didn’t see it as tu quoque, though; I saw it as an opportunity for Eliezer to qualify his heuristic and bring it more in line with reality.
It seems to me that “logical etiquette” requires us to respond to the very strongest arguments made by our opponents as well as to read arguments reasonably. Without going back to this discussion, it’s hard to tell who is being rude.
For example, suppose I make the following argument:
“American law schools always charge ridiculously high tuition because they can get away with it. The availability of generous student loans creates artificial demand and gives law schools the ability to charge exorbitant tuition. We need to reform the student loan program to fix this problem.”
If somebody responds by simply saying “well, I know a law school in Nebraska which charges $500 per year so your premise is wrong,” they are being rude. Because obviously when I said “always,” I really meant “generally speaking.” Further, the person is not really responding to the substance of my argument.
I have no idea if this is what happened with Eliezer and Thomas. I’m just saying.
I went and looked at the original post—it’s very long, but does actually address Thomas’ question:
Given the length of the post, I think the most reasonable assumption is that Thomas had forgotten that that particular point had been covered by the time he reached the end of it. I know I had by the time I saw his original question.
That’s really pretty ridiculous. You can try to speak precisely. Why should we all concede that hyperbole is acceptable in an argument?
If you want to argue about student loans, you could approach it from another side or focus on elite/private law schools. Overstating your case only works when preaching to the choir. Otherwise, it misinforms and makes you less credible.
I’m not saying that hyperbole is acceptable. But if I engage in hyperbole, it’s still rude to nitpick the hyperbole while ignoring the strongest part of the argument. In this case, the argument still stands if one substitutes “generally speaking” for “always.”
Sure, but it’s difficult to be sufficiently precise at all times. It’s rude to seize upon an imprecision to dismiss an argument while ignoring its main thrust.
I’m trying to make the point that it’s easy to jump on (especially glaring) imprecision. Your general thrust is weakened, often unfairly, by its presence. It can be a bummer for an argument if people jump on imprecise things, but hopefully you can stop that before it happens by omitting them in the first place.
I agree. But at a certain point, you have to rely on the other fellow to be reasonable in interpreting what you say.
To illustrate, it takes a lot of time and effort to formulate something like this:
It’s a lot easier to simply say “the sky is blue.” Any reasonable person understands what you mean.
This seems like the perfect place for the person making the claim about student loans to make a concession (demoting their “always” to “almost always”) thus making their debating partner more comfortable to listen to the meat of their argument; but it is also necessary not to take that demotion from “always” to “almost always” as defeat of the entire argument.
I basically agree, and in this situation my response would be something like “I concede that not all law schools charge ridiculously high tuition, but I think my basic point stands.”
Sometimes that sort of precision adds too much length. If you see an easily-fixed problem with an argument, it behooves you to point out the fix in the same comment as the problem.
I think that is fair. That would be the reasonable thing to do in a debate.
Precision in this case is not any longer (i.e., “always” vs. “typically”). It can be at times, but for people down with logic, you’d think “always” versus “there exists”, etc., would be a big deal.
Actually in this instance it’s made more precise just by leaving out the word “always”.
Hmm, you don’t think omitting it also implies all schools do it?
No, compare “cats have fur” vs “cats always have fur”.
By the way, please note that I did reply to this question once Adelene asked it. This is because I have Tim Tyler labeled as hopeless, but not Adelene.
Thomas != Tim Tyler
[Relevant Link]
I take it you talk to people and not ideas/arguments?
I do exactly the same; you can’t answer everything, and it’s effective to let who is asking help you decide.
Effective—trying to effect what?
Well, your purpose in replying to comments, which might be a combination of several things including enlightening yourself, enlightening others, or entertainment of yourself or others. Even if your motives were entirely based around yourself and others being less wrong, you would still be wise to look at who is asking when considering whether to reply.
In general I wouldn’t state such criteria out loud, but if it were me in these particular circumstances, I’d be very tempted.
To be precise, the claim was that when you can improve an algorithm by randomizing it, you can improve it further by derandomizing it. Assigning probabilities over uranium atoms isn’t even a random task in the first place—you just lawfully calculate some numbers, and state those numbers out loud as your probabilities; not a single randomized step in the cognitive algorithm. If you said “bet everything on a random direction” it would be inferior. This is all stated very clearly in the original page.
In other words, tim_tyler (WRONG: this was Thomas, and as some observed, FAIL) was misstating my previous position (as a quotation no less!) and entirely failing to get the entire point, as usual. But this is not logical rudeness. It is poor reading comprehension.
I regard being quoted as saying something I never said as offensive—it’s injuring others via poor reading comprehension, and can be very simply counteracted by going back to the original text and quoting only things that people actually say. So I will go on to state the following: It’s actually a worthy question why reading comprehension seems to dissociate from ordinary intelligence—why there are people like Phil Goetz who consistently misunderstand almost everything said to them, without being able to realize it, learn the lesson from repeated experience, or do better by trying harder, in spite of their intelligence otherwise seeming not below that of people with much better reading comprehension. I’m leaning toward the notion that they get an initial idea in their heads of what someone else seems to be saying, and then discard all data showing that the person is saying something else—confirmation bias to the point where it destroys the ability to read, though not, alas, write.
Other people have pointed this out, but they forgot to add:
FAIL
Wow. True.
I suspect a perfect scan would have literally showed that my brain was reading “Thomas” as “Tim_Tyler” every time.
Are timtyler and Thomas the same person?
Of course not.
[Relevant Link]
Yep, I figured that out eventually.
This may be the case for the instances you’re thinking of, but it doesn’t seem to cover all instances of poor reading comprehension. I regularly converse with someone whose comprehension difficulties appear to stem from a poor short-term memory, for example.
This seems to be another point in favor of having a less-meta subreddit or forum—we could share resources about that kind of peripheral skill there.
By knowing which uranium atoms belong to which isotope, and are therefore less stable, we can achieve better than random guessing.
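To put rough numbers on that (half-life figures quoted from standard references, so treat the exact values as assumptions): under the usual exponential-decay model, a U-235 atom is roughly six times as likely as a U-238 atom to decay in any given window, so “bet on the U-235 atom” beats a blind guess.

```python
import math

# Half-lives in years (standard reference values).
HALF_LIFE_YEARS = {"U-235": 7.04e8, "U-238": 4.468e9}

def decay_probability(isotope, years):
    """P(a single atom decays within `years`), exponential-decay model."""
    lam = math.log(2) / HALF_LIFE_YEARS[isotope]  # decay constant
    return 1 - math.exp(-lam * years)

for iso in HALF_LIFE_YEARS:
    print(iso, decay_probability(iso, 1e6))
# U-235: about 9.8e-4; U-238: about 1.6e-4 over a million years.
```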
But yeah, I’m sure you can define the problem narrowly enough that it’s impossible to improve on randomness—but then you’re switching sides, enlisting your intellectual energy in the service of entropy and decay rather than order and progress, so I’m not sure it’s fair to allow that.
Also… is it fair to call it “logical rudeness” if someone just doesn’t notice that you’ve made a point? (Or perhaps they tacitly agree and don’t see the need for further comment.) It would be nice if forum software (especially here!) had a flag so that commenters could mark “this is a point in need of answering”, and others could be alerted that a point hadn’t been answered. See this for some expansion on the idea.
Answered here; this seems to have been the case. (I disagree on the ‘switching sides’ view—that idea seems not to be constructed in a rational way.)
This sounds like a very good idea.