What’s your “rationalist arguing” origin story?
By rationalist arguing, I mean the sort of argument where you’d rather have the right belief at the end than “win.” This is quite different from the usual rationalization competition style of argument, and is pretty rare. But somehow, it happens on LessWrong a lot. It goes by other names too, but I am contractually obligated to put the word “rationalist” before as many other words as I can.
Back when I was a teenager, and I mean like 13-16, I frequented an online political forum, which shall remain unnamed. It was about as rational as one might expect, which is to say awful. And eventually, after three years of visiting, I stopped. The first mover of my departure was that I’d figured out the moves on the rhetorical chessboard—if I make an appeal to intuition here, they can make an ad hominem there, and so on and so on. And once this arguing became transparent, it also became boring—instead I started actually asking the question “how can I figure out what’s true?” And so I stopped arguing politics, which had been a pretty big habit of mine a week before.
Cut forward some years. I’m a LessWrong reader. And somewhere after my first few months of posting I figure out that I really shouldn’t be trying to “win arguments” here—after all, I stopped doing that when I was 16, right? Instead, I decide, I should practice “cooperative arguing,” where the goal is to work together (dialectically, though I wouldn’t have known to call it that at the time). And if you’re doing it really right, you should be able to “lose” (in quotes because you’re learning from it and don’t give a crap) about half the time. Don’t “try to lose,” but “try to be able to lose.” There’s a reflection of HPMOR in there, but it’s in a funhouse mirror.
Since then I’ve advanced a bit, so I can look at creating environments that help this happen, but this post was titled “origin story” and that’s what I’m curious about. For me, getting familiar with rhetoric and then getting fed up with it was necessary before I could go on to actually try to be able to lose. But it seems a little specific—there seem (perhaps) to be plenty of other people who follow the same rules as me on here, and you can’t all have frequented political forums during your forumative years (sorry). Or can you?
Was LW key in germinating this “rationalist arguing,” or did we have the seeds already within us? Memory says the latter, but statistics says the former. Or perhaps it’s just a selection effect, and old people who have figured all this out just didn’t tell me. Or they did and I didn’t comprehend.
Gonna need data. What’s your “rationalist arguing” origin story?
The Autobiography of Benjamin Franklin had a good effect. Though I don’t really follow it, it has helped me somewhat in focusing on being something other than a Defender of the Faith in an argument.
Good old Ben seems to have been an effective instrumental rationalist. If you’re going to argue, look to what you’re trying to achieve and behave accordingly. I’d add: look to what value you can get from the opportunity. Correcting the other guy has next to no value to you in itself.
For me, the breaking point occurred when I became a salesman. After about a month of working in that environment, I found it was just too easy to convince someone that an idea was true or false when it was beneficial for me to do so. Moreover, I realized that those same techniques were being used by my bosses on me.
I’d joked before about how easy it was to manipulate people, and I’d always cared about what was true (for as far back as I can remember, it was drilled in at a very early age), but that was the point where I stopped really caring about who “won” in an argument, because it broke “winning” down to rhetoric and manipulative technique. A few months later, I discovered LW, which helped break me of some bad beliefs I had at the time. But the shift from “winning arguments” to “finding truth” definitely happened when I got out of sales.
I think it’s often true that when person X recommends that you use some manipulation technique on others, person X is using the same technique on you. This is a good reason to avoid cooperating with dishonest people, even if you have no ethical concerns. Also known as: “There is no honor among thieves.”
(The implication does not work the other way; people usually use manipulation techniques without being explicit about them.)
When I went to college, I joined a philosophical debating group. We didn’t score debates; the only way to win was by winning over a convert or by being converted yourself. When people stood for office, they’d often be asked “Have you ever broken someone on the floor?” and “Have you ever been broken on the floor?” (we had a pretty pugilistic affect, so “breaking on the floor” was our way of saying “getting questions following your speech that either changed your mind or made you really uncertain”).
Until I found LW, it was the only culture I’d been part of where it was really honorable to admit error, but not in a cringe-y “who can really know?” way. The debate group was also the first place I met really smart people who disagreed with me on obvious-feeling things, so I got a lot more curious about which of us should break.
Weird brain. Dead friend. Intermittently suicidal ex-girlfriend. ’Nuff said.
Well, I don’t think I do “rationalist arguing” especially.
But I do think I’m OK at approaching disagreements as cooperative efforts at communication, rather than as debates to be won. I try to adopt the attitude that winning a discussion is like winning a game of catch.
No particularly interesting story, just a gradual realization that debate wasn’t getting me anything I wanted, regardless of whether I won or lost, so it was time to try other things, and the place to start was figuring out what I did want. Which, after a while, turned out to primarily be that I wanted to be able to walk away from a disagreement understanding what the other person believed, why they believed it, and what precisely it was we disagreed about… and, secondarily, to feel confident that they understood what I believed, why I believed it, and what we disagreed about.
After a while I mostly gave up on that second goal, as it became increasingly clear that mostly people weren’t interested in that, but they were usually quite happy to tell me all about their own beliefs.
I’m still getting the hang of it, really; it’s not uncommon for me to get sufficiently caught up in a disagreement that I end up trying to win it.
I suspect that a part of my attraction to truth was that my social skills were not good enough to make me a decent liar. For other people, lying often seemed to work, but it did not work for me. I was not able to predict what people wanted to hear, or to say it plausibly.
So in my model of the world, knowing the truth seemed like the best strategy for me. In reality, it worked well in math and computer programming, but I failed to translate it into instrumental rationality. I was not able to deal with the amount of existing disinformation myself, and I did not find other rational (outside the laboratory) people near me.
Interesting! Did you want to lie as much as the people it seemed to work for, or did you want to lie just a little? Or was it really some other group of skills you wanted, which just resulted in lies incidentally?
Lying is a part of “social skills”. It is not a necessary part—you can usually avoid expressing opinions on topics where speaking the truth would harm your image—but it helps to make people feel better and do what you want.
I suppose most people are lying without being aware that they are lying; compartmentalization probably helps a lot. (But because I am not a good liar, you should not trust this hypothesis—perhaps I am missing something very important here.)
I don’t know what the optimal amount of lying would be. It’s probably context-dependent.
I’m not sure it counts as an origin story, but after I noticed that a lot of discussions/arguments seemed to devolve into arguments about what words meant, or similar, I got the idea that this was because we didn’t ‘agree on our axioms’ (I’d studied some maths). Sadly, trying to get agreement on what we each meant by the things we disagreed on didn’t seem to work—I think the other party mostly considered it an underhanded trick and gave up. :(
Vanity. People who try to win at all costs and never back down dig themselves into holes and make fools of themselves.
I’m pretty sure this was my original/default style of arguing.
I mostly only argue to win for sport or for winning memetic battles.
I have no idea when I first started becoming a rationalist. I think I was curious about things, so I kept acquiring new intellectual ideas that interested me. Gradually I came around to the idea of rationality, and to the realization that winning the debate is not the same thing as being right.
Arguing against god(s) circa 9 years of age or so.