How do you notice when you’re rationalizing?
How do you notice when you’re rationalizing? Like, what *actually* tips you off, in real life?
I’ve listed my cues below; please add your own (one idea per comment), and upvote the comments that you either: (a) use; or (b) will now try using.
I’ll be using this list in a trial rationality seminar on Wednesday; it also sounds useful in general.
Cue: Any time my brain goes into “explaining” mode rather than “thinking” (“discovering”) mode. These are rather distinct modes of mental activity and can be distinguished easily. “Explaining” is much more verbal and usually involves imagining a hypothetical audience, e.g. Anna Salamon or Less Wrong. When explaining I usually presume that my conclusion is correct and focus on optimizing the credibility and presentation of my arguments. “Actually thinking” is much more kinesthetic and “stressful” (in a not-particularly-negative sense of the word) and I feel a lot less certain about where I’m going. When in “explaining” mode (or, inversely, “skeptical” mode) my conceptual metaphors are also more visual: “I see where you’re going with that, but...” or “I don’t see how that is related to your earlier point about...”. Explaining produces rationalizations by default but this is usually okay as the “rationalizations” are cached results from previous periods of “actually thinking”; of course, oftentimes it’s introspectively unclear how much actual thought was put into reaching any given conclusion, and it’s easy to assume that any conclusion previously reached by my brain must be correct.
(Both of these are in some sense a form of “rationalization”: “thinking” being the rationalization of data primarily via forward propagation to any conclusionspace in a relatively wide set of possible conclusionspaces, “explaining” being the rationalization of narrow conclusionspaces primarily via backpropagation. But when people use the word “rationalization” they almost always mean the latter. I hope the processes actually are sufficiently distinct such that it’s not a bad idea to praise the former while demonizing the latter; I do have some trepidation over the whole “rationalization is bad” campaign.)
Yeah, that’s what it feels like to me, too.
Yo, people who categorically or near-categorically downvote my contributions: assuming you at least have good intentions, could you please exercise more context-sensitivity? I understand this would impose additional costs on your screening processes, but I think the result would be fewer negative externalities in the form of subtly misinformed Less Wrong readers, especially, in this case, Anna Salamon, who is designing rationality practices and would benefit from relatively unbiased information to a greater extent than might be naively expected. Thanks for your consideration.
I don’t normally downvote your contributions (and indeed had just upvoted one), but I downvoted this one for whining about downvotes. (Especially as its parent is actually at +13 right now—maybe it was at −3 or something when you originally wrote the above, though.)
Anyone who categorically or near-categorically downvotes your contributions is unlikely to be swayed by a polite request for them not to do so.
Of course, but in this case it seemed deontologically necessary to request it anyway; I would feel guilty if I didn’t even make a token effort to keep people from being needlessly self-defeating. This happens to me all the time: “if it’s normally distributed then you should just straightforwardly optimize for the median outcome” versus “a heavy-tailed distribution is more accurate, or at least acts as a better proxy for accuracy, so we should optimize for rare but significant events in the tails”. I feel like the latter is often the case, but people systematically don’t see it and then predictably shoot their own feet off. (A toy illustration of the difference follows below.)
ETA: “To not forget scenarios consistent with the evidence, even at the cost of overweighting them; to prioritize low relative entropy over low expected absolute error, as a proxy for expected costs from error.”
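To make the normal-versus-heavy-tailed point concrete, here is a minimal sketch (my own, with made-up distributions and parameters, not anything from the comment above) of why planning around the median is roughly fine when outcomes are normally distributed but badly understates expected costs when they are heavy-tailed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two hypothetical cost distributions with (roughly) the same median of 10:
normal_costs = rng.normal(loc=10.0, scale=2.0, size=n)             # thin-tailed
heavy_costs = rng.lognormal(mean=np.log(10.0), sigma=1.5, size=n)  # heavy-tailed

for name, costs in (("normal", normal_costs), ("heavy-tailed", heavy_costs)):
    print(f"{name:>12}: median={np.median(costs):6.1f}  "
          f"mean={costs.mean():6.1f}  99th pct={np.percentile(costs, 99):7.1f}")

# In the normal case the mean tracks the median closely; in the heavy-tailed case
# rare large outcomes push the mean (and hence the expected cost) far above the
# median, which is the "optimize for the tails" concern above.
```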
If you consider it important that certain contributions not be unfairly downvoted, and you consider it likely that making those contributions under your name will result in them being unfairly downvoted, it would seem to follow that you consider it important not to make those contributions under your name. No?
It does follow, but I might still take the lesser of two evils and post it anyway. It’s true that using a different name would have been strictly better; for some reason that idea hadn’t occurred to me. (Upvoted.) In retrospect I should have found the third option, but in practice, when commenting on LW, I normally already feel as if I’ve gone out of my way to take a third option, and I feel that if I kept on in that vein I would get paralyzed and super-stressed. Perhaps I should update once again towards thinking harder and more broadly, even at the cost of an even greater risk of paralysis.
Well, I endorse nonparalysis.
That said, sometimes I can get both broad thinking and nonparalysis by thinking a dilemma through after I’ve made (and implemented) a decision and then, if I find a viable third option, noting it in my head so that it comes to mind more readily the next time I’m faced with a similar decision.
That’s because your sentences are badly formed.
As a debater (I know how much you guys hate debating and testing your theories), I have to do a lot of explaining, and if I include any flaws or fallacies from critical thinking (something else you don’t know about), then I know, and my opponent should know, that this is a mistake. So illuminating them helps to create a logical explanation. And that is our goal.
Too bad no one here seems to know diddly squat about critical thinking OR debating.
What brings you to Less Wrong, CriticalSteel2?
Cue for noticing rationalization: I find my mouth responding with a “no” before stopping to think or draw breath.
(Example: Bob: “We shouldn’t do question three this way; you only think so because you’re a bad writer”. My mouth/brain: “No, we should definitely do question three this way! [because I totally don’t want to think I’m a bad writer]” Me: Wait, my mouth just moved without me being at all curious as to how question three will play out, nor about what Bob is seeing in question three. I should call an interrupt here.)
It’s probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn’t aware of this in real time.
Sometimes this happens for me when the person has just made (or is about to make) an invalid argument that I’ve heard before and know exactly how to correct or rebut.
This one is a big clue for me too.
Cue: I find that I have multiple, independent reasons to support the same course of action. As with Policy Debates Should Not Appear One-Sided, there is every reason to expect multiple lines of evidence to converge on the truth, but there is no reason to expect multiple independent considerations to converge on the same course of action.
You can find multiple, independent considerations to support almost any course of action. The warning sign is when you don’t find points against a course of action. There are almost always multiple points both for and against any course of action you may be considering.
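As a back-of-the-envelope illustration of the exchange above (the numbers are my own assumption, not anything from the comments): if each of five genuinely independent considerations were about equally likely to fall on either side of a decision, having them all land on the same side would be fairly unlikely, which is why a one-sided tally is a warning sign.

```python
# Toy arithmetic: assume (hypothetically) that each independent consideration is a
# 50/50 coin flip between favoring and opposing a course of action.
n_considerations = 5
p_all_one_side = 2 * 0.5 ** n_considerations  # all "for" or all "against"
print(f"P(all {n_considerations} considerations on one side) = {p_all_one_side:.1%}")  # ~6.2%
```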
When I can’t explain my reasoning to the other person without a feeling of guilt that I am slightly manipulating them.
I am better at noticing when I lie to other people than when I lie to myself. Maybe it’s because I have to fully verbalize my arguments, so it is easier to notice the weak parts that would otherwise be skipped. Maybe it’s because when I talk to someone, I model how they would react if they had all the information I have, and this gives me an outside view.
When I feel relief that I did not have to change my point of view after thinking through something. (“[Someone] representing [my side] said that [X], therefore I should continue supporting [my side].”)
Orthogonal: Noticing when you’re rationalizing is good, but assuming that you might be rationalizing and devising a plan of information acquisition and analysis that is relatively resilient to rationalization is also a safe bet.
A rationalization-robust algorithm like that sounds like a good subject for a discussion post.
I noticed that there is a certain perfectly rational process that can feel a lot like rationalization from the inside:
Suppose I were to present you with plans for a perpetual motion machine. You would then engage in a process that looks a lot like rationalization to explain why my plan can’t work as advertised.
This is of course perfectly rational, since the probability that my proposal would actually work is tiny. However, this example does leave me wondering how to separate rationalization from rationality that merely comes with excessively strong priors.
What’s happening there, I think, is that you have received a piece of evidence (“this guy claims to have designed a perpetual motion machine”) and you, upon processing that information, slightly increase your probability that perpetual motion machines are plausible and highly increase your probability that he’s lying or joking or ignorant. Then you seek to test that new hypothesis: you search for flaws in the blueprints first because your beliefs say you have the highest likelihood of finding new evidence if you do so, and you would think it more likely that you’ve missed something than that the machine could actually work. However, after the proper sequence of tests all coming out in favour, you would not be opposed to building the machine to check; you’re not opposed to the theoretical possibility that we’ve suddenly discovered free energy.
In rationalisation, at least the second and possibly both parts of the process differ. You seek to confirm the hypothesis, not test it, so check what the theoretical world in which the hypothesis is unarguably false feels like, maybe? Checking whether you had the appropriate evidence to form the hypothesis in the first place is also a useful check, though I suppose false positives would happen on that one.
In rationalization you engage in motivated cognition; this is very similar to what happens in the perpetual motion example.
The twinge of fear. When I came across Bruine de Bruin et al 2007 as a cite for the claim that sunk costs lead to bad real-world consequences (not the usual lab hypothetical questionnaires), I felt a twinge of sickness in my stomach—and realized that I had now bought thoroughly into my sunk cost essay and that I would have to do an extra-careful job reading that paper, since I could no longer trust my default response.
Cue: The conclusion I’m investigating is unlikely to be correct, as in I feel that I couldn’t yet have enough understanding of the problem to single out this particular conclusion, and so any overly specific conclusion is suspect and not worth focusing on. It’s the “sticky conclusion” effect: once considered, an idea wants to be defended.
Do you generally find it useful to semi-anthropomorphize ideas/representations/drives like that?
Particularly in programming, it’s very useful to think of elements of a project (of varying scale) as knowing certain things, understanding certain ideas, having certain skills or ensuring certain properties. Once you have enough skill, many subtasks can be thought of as explaining some idea to the program or teaching it to do new tricks. (I’m not talking about anything AI-related, just normal software development.)
(My statement you quoted is somewhat wrong; it could either be read as referring to the hypothetically more common rationalization dynamic that I don’t observe much, or else should be changed to “once considered, an idea wants to take up more attention than it’s probably worth”.)
Cue: I feel emotionally invested in a particular fact being true. I feel like a blue or a green. May be related to Anna’s point about ugh fields.
Let’s also try the converse problem: what cues tip you off, in real life, that you are actually thinking, and that there is actually something you’re trying to figure out? Please stick such cues under this comment.
Cue of not rationalizing: I feel curious—a chasing, seeking, engaged feeling, like a cat chasing a mouse.
Cue of not rationalizing: I find myself thinking new thoughts, hearing new ideas (if talking to someone else), and making updates I’m interested in—even if they aren’t about the specific decision I’m trying to make. The experience feels like walking through a forest looking around, and being pleasantly surprised by parts of my surroundings (“oh! maybe that is why clothes with buttons and folds and detail are more fashionable”).
Cue of not rationalizing: I feel smart. I feel like many different parts of my brain are all engaged on the question at hand. It feels like what I’m saying could be turned into mathematics or a computer program.
Cue of not rationalizing: I don’t know what conclusion I’ll come to.
Cue for actual thinking: I spontaneously produce lines of research, not lines of argumentation. I feel relief when I can classify a problem as “already decided by empirical data” and I begin to seek that data out.
Cue of not rationalizing: I am picturing the two possible outcomes (from deciding correctly/incorrectly), feel fear (lest I end up with the bad outcome), and find my attention spontaneously on the considerations that seem as though they’ll actually help with that.
Cue of not rationalizing: I feel confident that I’m not rationalizing. If I feel uncertain about whether or not I’m rationalizing, then it can go either way and it’s a sign I should be on guard, but if I don’t feel any note of self-deception, then that’s normally pretty strong evidence that I’m not rationalizing. I’m not sure if this would be true for others; I’ve definitely seen people who claimed they felt they were not rationalizing while they were blatantly rationalizing.
Cue for noticing rationalization: I notice that I want a particular conclusion, and that I don’t actually anticipate updates from the considerations that I’m “thinking through” (if alone) or saying/hearing (if conversing).
The first part is the big one for me. When I’m reading a study and see “the results of our experiment were...” and I know what I want to see next, warning bells go off.
Cue: I say the word “clearly”, “hopefully”, or “obviously”.
This definitely doesn’t always indicate rationalization, but I say one of those words (and probably some others that I haven’t explicitly noticed) with much greater probability if I’m rationalizing than if I’m not, and saying them fairly reliably kicks off an “ORLY” process.
Oooh, I just realized: if you can pick out words like this when you’re writing, and you can code in your text editor, then your computer can catch this cue for you (like writegood-mode in emacs does). I’ve had this prevent ugly sentences in technical writing, but catching “rationalization words” in written sentences would also be useful when, say, writing a journal, or writing down an argument. A rough sketch of the idea follows below.
More awesomely, the highlighting is immediate in that mode, so you get tight feedback about the sentences you’re typing.
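Here is a rough, editor-agnostic sketch of that idea (the word list is a hypothetical example of my own; writegood-mode itself targets weasel words, passive voice, and duplicated words rather than these particular cues):

```python
import re
import sys

# Hypothetical personal list of "rationalization words"; edit to taste.
RATIONALIZATION_WORDS = ["clearly", "obviously", "hopefully", "at least", "can't I just"]

PATTERNS = [(word, re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE))
            for word in RATIONALIZATION_WORDS]

def flag_rationalization_words(text):
    """Yield (line_number, word, line) for every flagged word in the text."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word, pattern in PATTERNS:
            if pattern.search(line):
                yield lineno, word, line.strip()

if __name__ == "__main__":
    source = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
    for lineno, word, line in flag_rationalization_words(source.read()):
        print(f"line {lineno}: '{word}' -> {line}")
```

Hooking something like this into an editor’s on-the-fly highlighting would give the same tight feedback described above.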
Cue: Noticing that I’m only trying to think of arguments for (or against, if I’m arguing with someone) some view.
Self-supplication is a strong indicator of rationalization in me. Phrases like “at least”, “can’t I just”, and “when can I...” are so far always indicators that I’m trying to get myself to do something I know I shouldn’t, or to stop doing something I know I should be doing.
Cue: I have an “ugh field” across part of my mental landscape—it feels almost like literal tunnel vision, and is the exact same feeling as the “ugh field” I might get around an unpaid bill. (Except this time it’s abstract; it descends out of nowhere while I’m having a conversation about whether we should hire so-and-so, or what SingInst strategy should be, or whatever).
Wait, how do you notice an ugh field? I only notice these when I’m viewing myself “in far mode”, by trying to explain thoughts or emotions that I have, but wouldn’t expect “someone like me” to have.
Huh, really? For me it feels, um, the way I described it above—I go to think about an important-email-I-need-to-send, or whatever, and I feel physically tense, and averse, and if I wasn’t paying attention it used to be that before I’d exercised any conscious control I’d be checking the LW comment feed or something, to move away from the painful stimulus. And if it’s a bad case, and something I’m not yet thinking about, it feels as though my eyes stop moving freely across my field of vision, and as though I’m kind of trapped or something, and as though whatever boring thing is in front of me is suddenly intensely absorbing [so that I won’t have to think about that other thing]...
… oh! Thanks, that quite clarifies a few things.
Specifically: I notice my rationalization in the presence of obvious fear or disgust, but hadn’t been thinking of strong negative affects as “ugh fields.” Before your reply, I hadn’t verbally noticed that being suddenly interested in marginal details was a good sign of ugh fields, and thus a likely warning sign for rationalization.
Cue for noticing rationalization: My head feels tired and full of static after the conversation.
When I get into a particular negative emotional state, I make up reasons that I feel that way. When I start a sentence with a bitter “Fine!” or “I should have known better,” it’s a guarantee that the next statement out of my mouth (“you obviously don’t care”, “there’s no point in cooking for you because you hate food,” etc.) will be something I know to be false but that, if it were true, would explain why I feel rotten. Physically, the cue is me turning away from the person I’m speaking to. The actual explanation, “I like tomatoes, and I want you to like them too” or “I’m tired” or “My brain chemistry might be out of whack” are not as satisfying to say out loud as a condemnation of the other person.
(nods) Boy, am I familiar with this one.
That said, I have found that saying the actual explanation with all of the nonverbal signals of the false-but-satisfying condemnation is startlingly satisfying for me. My closer friends have learned to accept it as a legitimate move in our conversations; most of them respond to my explanation and ignore my body language.
It puzzles third parties, though.
So a haughty back-turn combined with “I had a bad day and I’m taking it out on you” is satisfying? Hmm, I’ll have to try that.
Yup. Dunno if it works for anyone else, but it delights me that it works for me.
Cue: I feel low status. In low status mode, I feel I’m on the defensive and I have something to prove. In high status mode, I think there are two effects which help avoid rationalization:
If I feel confident in what I’m saying, then I can entertain the opposing arguments because I don’t feel I have anything to fear from them
If I feel high status, I feel less afraid of being forced to change my mind
Edit: ah, Vladimir_Nesov said something similar on the other thread
I notice I’m rationalizing when after I lose my first defensive argument, I have another lined up and ready to go. If that continues in succession (2 or 3), then I have to stop and rethink my position.
I have a thought in my head that tells me “I’m being irrational”, and I try to shut it off, or not care about it. Usually this leads to an increased level of frustration and anger.
When in my mind I already realized I’ll have to change my opinion, but I don’t want to admit that right now, during the argument.
I think I perceive a pattern in my speech, possibly a change of tone in my voice, a certain pattern of thoughts (repetitive), and a certain pattern of goals (convincing others/self).
That’s helpful. Can you be more specific?
Hard to be specific about the exact wetware patterns, but I think I can hear myself sounding like a salesman :).
Another signal is not recognizing any validity in your opponent’s argument (or ‘recognizing’ it just for show). Most of the time when you’re “right” you should come to the conclusion that someone is very or even overwhelmingly wrong, but almost never completely wrong (unless the argument is purely mathematical).
Cue: my internal monologue contains the sequence ”… which is obviously just a rationalization, but...” and then proceeds as if it were true.
I never notice. Either I don’t do it, or I just don’t see it.
But I see other people rationalizing a lot.
Have you tried asking those close to you (significant others, close colleagues, long-term best friends) when they see you rationalizing?
People persistently disagreeing with me is one sign.
The more numerous they are, the smarter they are, and the more coherent-sounding their arguments, the more likely it is that there’s a problem at my end.
Of course, I often ignore such signals. For example, I still think that global warming is good, that our distant ancestors were probably clay minerals, that memetics is cool, that there will be a memetic takeover—and lots of other strange things.
Cue for rationalising: I feel like I’m ‘rocking back’ or ‘rolling back down the bowl’. Hmm. Let me clarify.
A ball in a bowl will roll around when the bowl is shaken, but it goes up the side, reaches a zenith, and rolls back down. Similar for trying to scale a steep hill at a run; you go up, reach a zenith, come back down. And again for balance: you wobble in a direction, find a point of gaining balance, and return to center.
The cue is feeling that in a conversation, discussion, or argument. We sort of roll around discussing things, something comes up that I rationalise away, and it feels like I’ve regained my balance: the conversation tipped towards something, wobbled a bit, and then rocked back to center. Often it’s in the form of reaching that zenith as a lull in the conversation, and I come in with “[reason], so we don’t need to worry about that”, where [reason] is rationalised.
Cue: feeling of urgency, of having to come up with a counterargument to every possible criticism of my reasoning before someone raises it (I sometimes think by imagining a discussion with someone, so I don’t necessarily have to anticipate actually talking about it for this feeling to come up).
Cue: Non-contingency of my arguments (such that the same argument could be applied to argue for conclusions which I disagree with).
The feeling that I’m getting into some sort of intellectual debt. Kind of like a little voice saying “you may say this fits with your understanding of the situation, but you know it doesn’t really, and if it doesn’t come back to bite you, it’s because you’re lucky, not because you’re right. Remember that, bucko.”
Quite why it uses the word “bucko” is beyond me. I never do.
Cue: my internal monologue (or actual voice) is saying one thing, and another part of my mind that doesn’t speak in words is saying “bullshit”.
What does it feel like, when it says that? How do you notice?
It feels the same as when I feel that someone else is bullshitting. My mind thinks it can recognize “good” and “bad” arguing styles (presumably learned from lots of training data). I can skim through an essay and find points where it feels like it’s slipping into irrationality without necessarily paying attention to what all the words are saying.
As for how I notice—I just do. My mind brings it to my attention. I assume there are things I can do to train this ability or put myself in the right mood for it, but that’s not something I’ve explored.
Sorry if this wasn’t useful. It’s hard to describe how something feels from the inside and have it come out coherent.
Cue: An argument that is being advanced for or against a conclusion doesn’t distinguish it from its alternatives (mostly noticed in others).
Cue for noticing rationalization: In a live conversation, I notice that the time it takes to give the justification for a conclusion when prompted far exceeds the time it took to generate the conclusion to begin with.
That is nearly always true, not just when a person is rationalizing. If you are correctly performing incremental updates of your beliefs, going back and articulating what caused your beliefs to be what they currently are is time-consuming, and sometimes not possible except by careful reconstruction.
I personally rarely find that to be evidence of rationalization; instead it generally means that I know various disjunctive supporting arguments but am loth to present any individual one for fear of it being interpreted as my only or strongest argument.
Cue for noticing rationalization: I feel bored, and am reciting thoughts I’ve already thought of.
Whenever I start to get angry and defensive, that’s a sign that I’m probably rationalizing.
If I notice, I try to remind myself that humans have a hard time changing their minds when angry. Then I try to take myself out of the situation and calm down. Only then do I try to start gathering evidence to see if I was right or wrong.
My source on ‘anger makes changing your mind harder’ was ‘How to Win Friends and Influence People’. I have not been able to find a psychology experiment to back me up on that, but it has seemed to work out for me in real life. It suggests that, if you think someone else is rationalizing, then the first step to changing their mind is to get them to be calm. It also seems to suggest that calming yourself down and distancing yourself from whatever generated the rationalization (a fight, a peer group, your parents, etc.) is what you need to do to work through a possible rationalization.
For rationalizing an action—I realize that the action is something I really want to do anyway. And then I figure (right or wrong) that all the reasons I just thought up to do it are made up, and I try to think about why I want to do it instead.
For me, it’s caring about means vs. ends.
Rationalizing: I’m thinking about how to justify doing something I’m already comfortable with; adhering to a plan or position I’ve already accepted. So I invent stories about how the results will be acceptable, that I couldn’t change them anyway, that if they are bad it will have been someone else’s fault and not mine, etc.
Not rationalizing: I’m thinking about the ends I’m trying to accomplish. I’m not committed to an existing plan or position. I’m curious about what plan will achieve the desired results. If the results turn out bad, I’ll critique my own plan and change it (I suppose we call that lightness) rather than blaming someone else.
Like some other people have said, one of my biggest tip-offs is if I have a strong negative reaction to something. Often this happens if I’m reading a not-particularly-objective report, experiment, treatise, or something else that could have been written with strong biases. My mind tends to recoil at the obvious biases and wants to reject the entire thing, but then the rational part of me kicks in and forces me to read through the whole thing before parsing an emotional reaction. After all, a point of being rational is to be able to sieve through other writers’ biases to see if they actually have important points buried inside; otherwise you don’t know what accuracies you might miss, or what biases you’re giving into yourself.

I find this also happens if I myself have a bias that I’ve never thought of before; I instantly have a gut reaction that feels at first natural but, almost an instant later, very out of place. Then I realise that I haven’t taken in all the information, I haven’t evaluated it as objectively as I can, and I haven’t arrived at an accurate conclusion. I’ve just gone, “No, x is bad because y makes me feel bad!”
The other thing (perhaps more importantly for my personal wellbeing) would be if I actually perform an action that is inconsistent with my rationalist world-view, such as if I did something illogical or something that would contradict a view that I hold as being morally and rationally just. These are usually things that I’d permit myself to do without thinking much when I was younger, but now would seem like abhorrent double-standards and ‘unjust’ behaviour (perhaps they’d even make me feel disproportionately guilty, but it seems to me that often acting illogically should make you feel that way!)
I feel really right about whatever it is, with many clever and eloquent arguments, but I also sorta know that I can’t quite epistemologically justify it.
(Note that I may actually be right—that doesn’t mean I wasn’t just rationalising.)
Cue: Any time I decide that it wouldn’t be worth my time to closely examine counterarguments or evidence against a live hypothesis. This normally takes the form of a superficially plausible cost/benefit analysis with a conclusion along the lines of “if you examined every single crackpot theory then you wouldn’t have any time to look at things that mattered”, which can largely only be justified by a rationalized working model of how mental accounting works.
The problem is not with ‘rationalization’. Many mathematical proofs started with an [unfounded] belief that some conjecture is true, yet are perfectly valid, as the belief has been ‘rationalized’ using solid logic.
The problem is faulty logic: if your logic is even a small bit off on every inference step, then you can steer the chain of reasoning towards any outcome. When you are using faulty logic and rationalizing, you are steering towards some outcome that you want. When you are using faulty logic but genuinely trying to figure out what is true, you just accumulate error like a random walk, which gives a much smaller error over time. (A toy simulation of this difference appears below.)
Other issue—most typically, people who are rationalizing are not the slightest bit interested in catching themselves rationalizing.
edit: to clarify. You may have the goal of winning a debate, not caring what the true answer is, and come up with an entirely valid proof, if for bad reasons. You may also have the goal of winning a debate, be wrong, and make up some fallacious argument, neglect to update your belief, et cetera. That happens when you are not restricting yourself to arguments that are correct. Or you may have the goal of winning a debate, be wrong, and fail to make an argument, because you are successfully restricting yourself to arguments which are correct and don’t use fallacies to argue for what you believe in anyway.
In mathematics, the reasoning is fairly reliable, and it doesn’t make the slightest bit of difference whether you are arriving at a proof because you wanted to know if the conjecture is really true, or because you wanted to humiliate some colleague you hate, or because you didn’t want to lose a debate and didn’t want to admit you’re wrong. With unreliable reasoning, on the other hand, you produce mistakes whether you are rationalizing or not, albeit when rationalizing you tend to make larger mistakes, or become a mistake factory. Still, you may start off with the good intention of finding out whether some conjecture is true and end up making a faulty proof, or you may start off with a very strong, very ill-founded belief about the conjecture, get lucky enough to be right, and find a valid proof. You can’t always trust the arguments you arrived at without rationalizing more than the ones you arrived at while rationalizing.
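A toy simulation of the random-walk point above (my own sketch; the step size and step count are arbitrary assumptions): unbiased per-step errors tend to cancel and grow roughly like sqrt(n), while errors consistently steered toward a desired conclusion grow linearly with n.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_trials, step_error = 100, 10_000, 0.1

# Honest but sloppy reasoning: each inference step is off by +/- step_error at random.
unbiased_drift = rng.choice([-step_error, step_error],
                            size=(n_trials, n_steps)).sum(axis=1)

# Rationalizing: every step's error is pushed toward the desired conclusion.
steered_drift = n_steps * step_error

print(f"unbiased: mean |total error| = {np.abs(unbiased_drift).mean():.2f} "
      f"(roughly sqrt(n)*step = {np.sqrt(n_steps) * step_error:.2f})")
print(f"steered : total error        = {steered_drift:.2f} "
      f"(n*step = {n_steps * step_error:.2f})")
```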
Why is this getting down voted?
I’ve noticed my ability to predict my comment karma has gone down in the last few months, the weirdest example being some of my comments fluctuating about ten to fifteen points within a single day, prompting accusations that I had a bunch of sockpuppets and was manipulating the votes. It’s weird and I don’t have any good explanation for why LW would suddenly start voting in weird ways. As the site gets bigger the median intelligence tends to drop; does anyone know if there’s been a surge in registration lately? It’s true I’ve seen an abnormal amount of activity on the “Welcome to Less Wrong” post. I feel like we might be attracting more people that aren’t high IQ, autistic, schizotypal, or OCPD, which would be sad.
I generally treat the votes as a mix of indication of clarity and of how much people like/dislike the notion.
Didn’t expect any negatives on this post, though; I assumed something wasn’t clear and added the edit. I still think my point is entirely valid: good logic does not allow you to make a fallacious argument even if you want to, and bad logic often leads you to wrong conclusions even when you are genuinely interested in knowing the answer; furthermore, the rationalization process doesn’t restrict itself to assertions that are false.
Perhaps the concern is that the reduction to a logical question, i.e. to the set of premises and axioms, was the faulty part? After all a valid argument from false premises doesn’t help anyone, but I’m sure you know that.
Because someone’s rationalizing something maybe ;)