Feeling Rational
Since curiosity is an emotion, I suspect that some people will object to treating curiosity as a part of rationality. A popular belief about “rationality” is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. System 2’s deliberative judgments aren’t always true, and System 1’s perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality.” Both systems can serve the goal of truth, or defeat it, depending on how they are used.
For my part, I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on mistake-producing epistemic conduct. “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.” Conversely, an emotion that is evoked by correct beliefs or truth-conducive thinking is a “rational emotion”; and this has the advantage of letting us regard calm as an emotional state, rather than a privileged default.
So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: “That which can be destroyed by the truth should be.” My dreaming self’s happiness was opposed by truth. My sadness on waking is rational; there is no truth which destroys it.
Rationality begins by asking how-the-world-is, but spreads virally to any other thought which depends on how we think the world is. Your beliefs about “how-the-world-is” can concern anything you think is out there in reality, anything that either does or does not exist, any member of the class “things that can make other things happen.” If you believe that there is a goblin in your closet that ties your shoes’ laces together, then this is a belief about how-the-world-is. Your shoes are real—you can pick them up. If there’s something out there that can reach out and tie your shoelaces together, it must be real too, part of the vast web of causes and effects we call the “universe.”
Feeling angry at the goblin who tied your shoelaces involves a state of mind that is not just about how-the-world-is. Suppose that, as a Buddhist or a lobotomy patient or just a very phlegmatic person, finding your shoelaces tied together didn’t make you angry. This wouldn’t affect what you expected to see in the world—you’d still expect to open up your closet and find your shoelaces tied together. Your anger or calm shouldn’t affect your best guess here, because what happens in your closet does not depend on your emotional state of mind; though it may take some effort to think that clearly.
But the angry feeling is tangled up with a state of mind that is about how-the-world-is; you become angry because you think the goblin tied your shoelaces. The criterion of rationality spreads virally, from the initial question of whether or not a goblin tied your shoelaces, to the resulting anger.
Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them. Sometimes we run away from strong feelings by denying the facts, by flinching away from the view of the world that gave rise to the powerful emotion. If so, then as you study the skills of rationality and train yourself not to deny facts, your feelings will become stronger.
In my early days I was never quite certain whether it was all right to feel things strongly—whether it was allowed, whether it was proper. I do not think this confusion arose only from my youthful misunderstanding of rationality. I have observed similar troubles in people who do not even aspire to be rationalists; when they are happy, they wonder if they are really allowed to be happy, and when they are sad, they are never quite sure whether to run away from the emotion or not. Since the days of Socrates at least, and probably long before, the way to appear cultured and sophisticated has been to never let anyone see you care strongly about anything. It’s embarrassing to feel—it’s just not done in polite society. You should see the strange looks I get when people realize how much I care about rationality. It’s not the unusual subject, I think, but that they’re not used to seeing sane adults who visibly care about anything.
But I know, now, that there’s nothing wrong with feeling strongly. Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.” When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light—I know that I can never truly understand it, and I haven’t the words to say. Despite all my philosophy I am still embarrassed to confess strong emotions, and you’re probably uncomfortable hearing them. But I know, now, that it is rational to feel.
It seems to me that social consensus accepts expression of strong feelings by women, just not by men.
Depends on the culture, I suppose. I’m a Korean woman and I’ve always been scolded for being too extreme in my expressions of emotions growing up.
Is it actually acceptance or just condescending dismissal?
Since they aren’t part of the web of cause-and-effect (so they might be epiphenomenal), is it impossible to be irrational about norms?
I don’t think it’s inevitable that having emotion causes irrationality, but I think there is a tendency for it to cloud your mind and restraining yourself is a good idea. Maybe after calmly examining things you can say to yourself “This appears to be an optimum situation in which to freak out”.
I agree that strong emotions can be very appropriate to many situations, but also think there is wisdom in the usual expectation that bias is correlated with strength of emotions. So it is crucial for us to develop better cues for distinguishing more versus less biased emotions.
Well, women constitute at the very least half of society, so it’s certainly acceptance within that half. I actually think it’s acceptance more broadly, though. Women are arguably not accepted by men in general, but insofar as they are accepted, it is only in a few narrow domains (primarily science, engineering, and big business) that women do best by adhering to men’s norms.

Actually, though, emotional suppression is only normative among men in science, in the military, and in low-status positions. Enthusiasm (irrational exuberance) is the ultimate business virtue: if one doesn’t claim a level of confidence that can’t possibly be justified, one is simply not a contender for venture capital or angel investor money; in a hierarchy, one’s not suitable for upper management or sales. Beyond that, almost all social elites are, in large measure, “emotional expression professionals.” Actors and actresses are the most obvious example, but I would say this is also true of athletes, artists, and other performers and entertainers, religious leaders, and politicians. Al Gore was dismissed with a characterization of “wooden”; Hitler practiced his emotional expressions for hours in front of a mirror.
That’s a really nice view to have on emotions. And frankly, I’ve known it all along but never put it the way you have. Cheers!
What bothers me is that, in the case of “emotional expression” in a profession, it is possible to fake it, and I am sure we have seen examples of such people (hypocrites) in our lives. But maybe in a given situation it is rational to fake it.
PS: Could you give the source of the Hitler example?
It sounds plausible, but I think it’s something of a premature conclusion. Consider how one would best fake an emotion: simply by motivating oneself to feel that way. Faking an expression is much, much harder than simply choosing a field that matches your own moods and preferences. The reason we see people who don’t appear genuine in high-ranking positions as well as very low ones is that they are motivated by something other than the above: a drive for excellence, or desperation, where feelings do become a tool. But thinking in terms of the majority, it’s easier to assume that convention and self-discipline make most people’s professionalism indistinguishable from any other motivator they might feel.
“Consider how one would best fake an emotion: simply by motivating oneself to feel that way.”
Brilliant. I need to remember this phrase.
I’m considering this quote, and also wondering how it would be possible, as most people hold the belief that you can’t feel anything that your heart doesn’t want to feel. Is it irrational to “listen to one’s heart”? Can you really change your thinking, motivate yourself to change your thoughts, and thus change your feelings?
Yep.
Yes. This is called Rational Emotive Behavior Therapy, and it was developed by Albert Ellis.
Whenever I notice myself thinking “I knew that all along,” it reminds me to check for hindsight bias. Sometimes it is, sometimes it isn’t.
It’s one of the easier biases to catch, once you have that cached pattern set up.
What. Female misogyny seems to be at least as powerful as male misogyny, however contradictory that may seem. Women do not generally accept womanhood; it takes a certain subtype of feminist to do so (the first wave did _not_; the second wave is arguable).
Great points, Michael. E.g., Clinton and “I feel your pain”…
This is one of those rare moments where the usually horribly heterodox economist, me, defends orthodox economic theory. Looked at very closely, orthodox microeconomics says nothing at all about people’s preferences themselves, which presumably involve their emotional reactions to various things. What is assumed is certain things about those preferences: that people know what they are, that they exhibit continuity, and that they have a degree of internal consistency in the sense of exhibiting transitivity; people also behave more “rationally” and exhibit continuous demand functions if their utility functions exhibit convexity. So rationality is not about what your preferences are, or the degree to which they are based on your emotions. It is that you know what they are, that they have a degree of internal coherence or consistency, and, the biggie, that you actually act on the basis of your real preferences.

A lot of the problems regarding “irrationality” involve people behaving in internally inconsistent manners, especially over time. Behavioral economists are now arguing over whether one should deal with this via multiple-personality (or multiple-preference-system) models, or via approaches that stress focusing on “rationality” and keeping in mind one’s “real” preferences. Thus, hyperbolic discounting involves “time inconsistency”: I want things now that I shall regret having wanted so much later. I eat the candy bar now and wake up fat later, etc. Is this a combat between two preference systems, or just “irrationality”? People like Matthew Rabin, who tend to use the latter approach, in fact say that the goal is to have people be “rational”: to know their own real preferences and to act on them. If they really do not mind being fat, then go ahead and eat the candy bar. But in any case, it is perfectly OK either way for one’s caring or not caring about being fat to be based on one’s emotional reactions. One should understand one’s own emotional reactions. That is rationality.
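To make the “time inconsistency” concrete, here is a minimal worked example under one commonly assumed functional form (the numbers are illustrative, not from the comment above). A hyperbolic discounter values a reward of size $A$ arriving after a delay of $D$ days at

$$V(A, D) = \frac{A}{1 + kD}.$$

With $k = 1$, a reward of 110 arriving in 31 days is worth $110/32 \approx 3.4$, which beats 100 in 30 days at $100/31 \approx 3.2$; but once day 30 arrives, 100 now is worth the full 100, while 110 tomorrow is worth only $110/2 = 55$. The preference reverses as the dates draw near, which is exactly the candy-bar-now, regret-later pattern. An exponential discounter, by contrast, values the pair as $110\,\delta^{31}$ versus $100\,\delta^{30}$, a comparison of $110\,\delta$ to $100$ at every horizon, so no reversal ever occurs.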
“It seems to me that social consensus accepts expression of strong feelings by women, just not by men.”
Traditionally, women were thought inferior to men precisely because they were thought to have stronger feelings.
It is not thought wise to have anyone “emotional” in any position of importance.
But “emotional” is usually interpreted to mean that your feelings are easily swayed.
“It is not thought wise to have anyone ‘emotional’ in any position of importance.”
By whom? People who would like to “be able to have a beer” with a President?
I think Vassar is a little more accurate here, but people apply the expectation of unemotionality only within a narrow field that relates to their specialty at work. It would not be beyond the pale to see someone cheering enthusiastically for a sports team, for example.
Your thoughts on this would profit a lot from some reading of recent research in neuroscience—specifically people like Damasio, LeDoux, Ramachandran, and Sacks (there are lots of others, too). The idea that rationality begins with some ‘asking how-the-world-is,’ as if that act itself were not completely shot through with emotional responses, is hopelessly naive. Without an emotional response, one could never even form the judgment that the-world-is-any-particular-way. The brain-lesion studies on this are pretty clear; it’s an emotional response that both triggers and suffuses the judgments we make about the way-the-world-is. For sure, strong emotional responses can get in the way of other emotionally charged inferences (those that are typically thought of as canonically rational), but the whole opposition of emotions and rationality, as if they were in any way exclusive, is wrong-headed. There are some emotional responses to situations that we call rational, and there are others that get in the way of those. The normative evaluation of the judgments must be left up to some other evaluative metric—e.g., conduciveness to other emotional attitudes, etc. In a word, Hume was right, righter than even he knew.
As I see it, what’s most important is to make a division between rationality and emotions in terms of where they fit in the equations. Rationality describes the equations, emotions provide a source of evidence that must be applied correctly. If an outcome makes me happy, that should make me desire that outcome more, but not make me think that outcome more likely than if it made me sad (unless, of course, I’m evaluating the probability that I will be motivated to do something).
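In the standard expected-utility notation (a textbook formalization, offered as a sketch of this division rather than anything from the post), the value of an action $a$ is

$$EU(a) = \sum_{o} P(o \mid a)\, U(o),$$

and the rule above says that how happy an outcome $o$ would make me belongs in $U(o)$, while only evidence is allowed to move $P(o \mid a)$.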
Unfortunately, I think this model of mind is not how the human mind actually works. Emotions appear to change the equations, not their arguments, so eliminating emotions seems like an appropriate measure to increase the human brain’s approximation of a rational process. Maybe you can allow yourself to feel happy or sad at an outcome without it affecting the outcome, but getting to that point may require an unemotional transition period as you change your thinking to match that of a rational process.
Stephan, it is important to establish normative separation between the roles that emotions play in perception (which may be part of the process of establishing truth) and the roles that emotions play in motivation (which should not normatively affect what we believe to be true). Yes, it may be the same emotion doing both things. But that doesn’t change the normative difference in the roles.
When I say “rationality begins with” I am talking about deriving the normative criterion, not about the brain’s real-world temporal order of evaluation.
(And yes, I’m read up on neuroscience to the level you specified.)
It’s my impression that men and women are permitted somewhat different sets of emotions—men are freer to show anger, women are freer to show sadness. And that showing emotion is more permitted now than it was a few decades ago.
As far as I can tell, it’s possible to be emotional (or at least fairly emotional) and logical at the same time, so long as the emotion isn’t territorial attachment to an idea.
The different emotions permitted for different sexes could well be because of evolutionary reasons not just social reasons.
Eliezer: It may be rational to (choose to) feel, but feelings are not rooted in reason. Reason is a consistency-based mechanism which we use to validate new information based on previously validated information, or to derive new hypotheses based on information we have already internalized. One can reason with great skill, and one can know a great deal about the reasoning process in general, and yet one’s conclusions may be false or irrelevant if one has not validated all of the basic assumptions one’s reasoning ultimately depends on. But validating these most fundamental assumptions is difficult and time-consuming, and it is a task most of us do not tend to, as we instead scurry about achieving our daily goals and objectives, which in turn we determine by applying reason to data and attitudes we have previously internalized, which in turn are based ultimately upon basic premises we have never thoroughly investigated.
These are the thoughts that I get after reading your eulogy for your brother, Yehuda. I get the impression that you are too busy studying how to defeat death to stop and think why death should be bad in the first place. Of course, to stop and think about it would mean opening yourself to the possibility that death might be acceptable after all, which in turn would threaten to annihilate your innermost motivations.
Think about it this way. The past 28 years of your life are already dead, as history is not something living. The future years of your life, meanwhile, have yet to come into existence. You have already lost all of your past; and as soon as you “gain” your future, you already lose it. All you are is but an ever-changing state; the “now” that you inhabit is but an instruction pointer in the registers of a machine that is continually executing instructions.
What do you care if the machine stops processing at any point? Do you think you will notice? Does a program’s thread of execution notice when the OS swaps it out and resumes execution on another thread? Does a thread of execution notice if it is never resumed?
I’m not claiming that we are but software running on the hardware of the universe, but this is what you seem to posit; and if this is so, then death is no more terrible than it’s terrible that the sky appears blue, or that the grasses appear green, or that the Sun appears yellow.
And yet, you seem to believe that death is somehow “horrible”, so you are sad when it takes place; and you believe that other things are somehow “good”, so you are happy when they happen. This seems to be at odds with the things-just-are view that you otherwise represent, and it tells me that these feelings of yours are based on something more fundamental, something more axiomatic than reason. Reason is a consistency vehicle; but these feelings of yours, they simply are. Reason may help provoke them, but they exist independently of reason. And indeed, such feelings are known to distort the reasoning process in people substantially, causing them to delay validation of critical basic assumptions, thus causing them to reach and stick by invalid conclusions even though their reasoning process may be sound. Garbage in, garbage out.
This reason-distorting effect is why emotions are thought of as at odds with reason. And with good reasons. :)
You so completely miss the damn point that, after downvoting you once for willfully insulting Eliezer’s completely rational and well-expressed intimate feelings, I’d downvote you 10 times for general stupidity. “Reason” can only be driven by an external cause! (whether it’s hedonism, ambition, curiosity, altruism, etc.) If all YOU truly cared about was the “things-just-are,” you’d act completely at random, which is hard to conceive even in a mental patient.
We need the truth as a weapon to carve what we want from an uncaring universe and keep it from squashing us. Yet it can only illuminate and clarify our desires, not shape them.
EDIT: after looking at Mr. Bider’s profile and contributions, I have a weak suspicion that he’s a troll. Well, I don’t care about that.
I agree with Mr Bider. Humans get their terminal values from a combination of genetic transmission and cultural transmission. The former has been recently called on this blog the thousand shards of desire. Most people, even most extremely intelligent people, use their intelligence pursuing the values that have been transmitted to them genetically and culturally. What I find more virtuous than raw intelligence is the willingness of the person to turn his intelligence on these values, to question every one searchingly and to be prepared to throw them all out if that is what his intelligence and his studies instruct him to do. (Actually, if you throw them all out, you run into a problem staying motivated, but this is not the place . . .)
How I choose to conquer death is to redefine “me” to include not only my intelligence but also the effects of that intelligence on the world, so that when my body dies and my intelligence ends, “I” continue. The death of a mind is not the end of the world.
I don’t think you can define your way into immortality.
You can say that you care more about the world going on than about your own existence, and that may well even be true; but that isn’t conquering death, it’s just putting the problem in a box and writing “Not Death” on the label.
It seems to me that the basic irrationality implicated here is the assumption that there is such a thing as rationality.
Alright, I just wanted to put that in a clever continentalist-sounding quip but didn’t quite manage. What I mean is this: it (usually) makes sense to talk about beliefs being true or false. We can even talk about tendencies as being more or less inclined to reach true beliefs (given background assumptions about the distribution of such truths). However, implicit in this post and many of the comments that follow is the idea that rationality is some kind of discipline that can be followed and applied.
Or to put the point differently: I think many people here are making the implicit assumption that there is some objectively correct way to evaluate evidence (over and above the constraint of simple logical consistency). However, it’s an entirely contingent fact that the sorts of rules we use to predict events in the world around us (scientific induction) actually succeed; we could instead have found ourselves in a world where counter-induction holds (the more times a simple-seeming pattern has occurred in the past, the less chance it will occur in the future).
Worse, even if you believe that there are some magic objective facts about what the ‘right’ epistemic responses to evidence are, at best rationality is a term that can be applied to a particular description, not to a person or a person’s actions. To see why, note that I can always describe the same actions by an infinite number of possible rules. For instance, suppose my friend asks his computer to spit out a random claim about number theory and decides to put total faith in its truth, despite a widely accepted supposed proof of the converse. That sounds super irrational, yet the same behavior is equally well described by saying my friend was following the rule of believing claim X about number theory with probability 1 upon first consideration. Since claim X is in fact a theorem, that rule is perfectly rational.
A side point on the hypothetical universe where counter-induction largely holds: one issue with that world is whether one of that universe’s bizarre inhabitants (assuming such a place somehow supports life) could use induction to discover the counter-induction rules. E.g., “When one spots an apparent pattern, the next result will be the inverse of what the pattern predicts; I have noticed this meta-pattern historically, and it has always held so far.” Which seems slightly paradoxical.
Correct me if I read it wrong, but did you just say that induction doesn’t work? I admit I don’t know how to even begin arguing for induction, so you’ve got a great opportunity; give an actual argument, rather than just saying we live in “a world where counter-induction holds.”
Also, while any phenomenon can be described by an infinite number of math equations, the more complicated ones are less likely to be true. See also, Occam’s Razor. Obviously, this relies on probability theory, which was formulated by induction, but you did say “even if you believe that there are some magic objective facts” which I assumed to be induction, probability theory, etc.
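One common way to cash out “the more complicated ones are less likely to be true” is a Solomonoff-style prior (a standard formalization, assumed here for illustration), which weights each hypothesis $h$ by the length $K(h)$ of its shortest description:

$$P(h) \propto 2^{-K(h)},$$

so every extra bit needed to write an equation down halves its prior probability, and the infinitely many complicated descriptions of a phenomenon still sum to a bounded total.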
I agree that you don’t have to throw out emotion to be rational. You just have to put it in its proper place. Logical analysis has to be given a higher priority in forming a good picture of events. But once you have done it, emotion is what powers your actions and words, and gives them meaning.
If I did not have millions of years of evolution making me hate death, it would be less meaningful to talk about how much we need cryonics. I would have to appeal to its usefulness in special cases (preserving great minds or useful workers) rather than advocating the abolition of preventable death. It would not be so emotional, or so urgent.
Ironically, the fact that it is so emotional is what leads people in our society to doubt it. They think it promises something they could not hope for it to deliver. Many of them have had to work through the pain of several nearby deaths already, and thus would have to question the assumptions (afterlife, nonpreventability, survival of the species, whatever) that helped them work through it to a state of acceptance.
I agree that it’s not necessarily irrational to feel, but I think the way we feel is clearly irrational. For example, our emotions don’t seem to work in a time-consistent manner, and we often later regret actions that we take based on strong emotions, when those emotions eventually fade away. If we could modify the way our emotions work cheaply and safely, I think many of us would probably take advantage of the opportunity. A rational agent wouldn’t wish to modify its mind like that.
Here’s another, more specific example. I sometimes feel a sense of schadenfreude when someone that I might be in status competition with publicly makes a mistake or suffers a setback of some kind. By itself, this feeling may not be irrational (except perhaps on a group level), but I simultaneously feel a disgust for myself for feeling this way, and wish that I could edit away this ugly emotion. (Until then, I have to spend some effort to keep myself from being overly critical of others.) Would anyone claim that these emotions together do not constitute irrationality?
If you consider lack-of-emotion as just another kind of emotion, it too will not always be activated at the best of times. The “irrationality potency” is not in the presence of emotions, but in the imperfection of the way they act.
“For example, our emotions don’t seem to work in a time-consistent manner, and we often later regret actions that we take based on strong emotions, when those emotions eventually fade away.”
There is a rational explanation for this; I will use anger as an example. People try not to anger people who easily get angry and violent, so anger has benefits. However, this can also cause other people to want to punish the angry person for his violence, and here regret comes in and lowers the punishment the angry person gets. Imagine a trial where a man found his wife with another man and beat them both until they were almost dead. Which explanation will lead to a lower punishment: “I do not know what got into me, and I have regretted doing it ever since; I hope they will some day forgive me for losing my mind for a moment,” or “By beating them I try to make sure that both my wife and the people who know us will not attempt this or any other thing that might upset me badly again”?
“P. C. Hodgell said: ‘That which can be destroyed by the truth should be.’”
That sounds very much like a quote from a Russian anarchist which was I think mentioned in one of Turgenev’s books, possibly “Fathers and Sons”. Here we go:
“Anything that can be broken, should be broken.” ~Dmitri Pisarev
Solomon says: nothing new under the sun.
Isn’t “by the truth” an extremely important qualifier?! I’m really curious how the two quotes are the same.
Rationality destroys emotions.
Not always.
But here is a way in which it happens: my friends sometimes have very strong emotions that generate behavior that is completely immoral, sometimes self-contradictory, etc. The natural response in me used to be to get mad at them in return, creating a circle of anger for a while, which eventually fades away. When this happens, usually the one with more social credence “wins” and does not have to concede whatever is at stake.
Now rationality enters. My friend gets super angry about something. I know the emotion works in such and such a way, and as such it blinds him to the irrationality of his behavior, and its immorality. Therefore I no longer feel so strongly against him, for I understand what he does not, why he does not, etc.
Since I understand him, hating him is no longer natural, for I’ve detached my conception of the awful moral act from my conception of the “agent” of the act; I understand that it was not him, but what Dennett calls free-floating rationales, that did the job.
This is one of the ways reason can and does destroy emotions.
I’d phrase it as, rationality prevents you from experiencing irrational (i.e., pointless) emotions.
My theory is that almost all negative emotions have to be learned by imitation. They are cached responses copied from role models at an early age, almost always irrational (read: counterproductive), and unfortunately there is no automatic updating system for them.
Even worse is that whenever we experience a cached negative emotion our thinking is impaired (especially by anger), so there is even less chance that we’ll notice and update it. Still worse is that even if we notice the response is irrational and try to update, once the sour taste of the emotion has infected the mind a clean update becomes extraordinarily difficult.
My solution: Make it a habit to imagine awful or offensive situations in advance, and see yourself reacting perfectly.
Like imagine you get stuck in traffic when you’re in a hurry, but you’re totally zen about it. Since it’s your imagination you may as well be 100% chill, heck why not even find some reason to be happy about it? Then that will be the cached response next time you hit traffic.
Or say someone’s kid spills grape juice on your new white carpet (and realistically you’re not going to ask for remuneration). May as well imagine yourself reacting wonderfully, without missing a beat, no hint of irritation whatsoever. This kind of thing really impresses people.
Starts out right; being pointlessly angry at all those crazy drivers is a waste of energy, and often the result of various fallacies (the fundamental attribution error? if you drove like that, you’d have a reason to).
But then you mention “why not even find some reason to be happy about it?” That’s a bias; cut it out. Also, you want the kid to realize that spilling juice onto the carpet is unacceptable.
Someone who takes rationality-as-attire (like Roddenberry’s Spock) would avoid strong emotions because they are superficially irrational.
I’m particularly interested in the idea of rational emotion promoted by Objectivism, as presented in the Ayn Rand Lexicon.
Recently, there were rape allegations cast at Julian Assange, founder of Wikileaks. Some people in positions of power saw fit to expose identifying personal information about the accusers to the Internet and therefore, the world at large. This resulted in the accusers receiving numerous death threats and other harassment.
When safety can be destroyed by truth, should it be?
I disagree here with what seems to be an unstated assumption: namely, that the injunction “That which can be destroyed by the truth, should be” is intended for application to the world. I instead understand it, as I think many here understand it, as applying to beliefs. If I believe something, it should not be false, and if I think it is false, it is a good thing for me to destroy that belief. Furthermore, in debates over religion, politics, and science, truth is the value that should be pursued. But the idea that I must tell the police about a crime a friend committed because “what can be destroyed by the truth, should be” seems absurd, and it is not how I or, I think, many others interpret the phrase.
Also, in the case you gave, safety isn’t being destroyed by the truth; it’s being destroyed by the general public’s reaction to the truth. It is pointless to threaten anybody over a past action, so this is just another case of an irrational emotional response by the collective.
But this is an interesting case of how the pure practice of rationality can be dangerous in an irrational world: is it truly moral to pursue (and in this case, expose) the truth when you can’t expect everyone else to handle or react to it properly? One possible solution could be that, by practicing rationality and truth-seeking personally, even against the common grain of society, you could be subtly influencing society in a more rational direction. Once enough people do this, it “makes the world safe for rationality” by creating a rational society where irrational emotional reactions to truth are highly discouraged. Since a rational society is more optimized, this maximizes utility in the (very) long run.
In my entire team of engineers, the one common look I get is a poker face. It appears that this kind of expression is the “default,” and showing emotion is something foreign. I wish people would be OK with just expressing themselves—how they feel about any particular situation.
Then the truth would quite often be out on the table, and everybody could deal with it (along with the consequences!).
I think the saying “What can be destroyed by truth, should be” is a little too black and white to work well in all aspects of life. For example, a clumsy and fat person who thinks he is actually rather agile might be a lot happier with this false belief than if he were aware of the truth*. Of course, it could be said that if he knew the truth, he would start to exercise and eventually become healthier, but that’s not necessarily the case. Another example: if a not-so-good-looking person thinks he looks good, he might be encouraged by that false belief to ask someone he likes for a date.
*Here, when I talk about truth, I mean how things are in physical reality (whatever that may mean).
I may be somewhat more radical than a lot of people here, but I don’t think the fat man should be deluded. It will hurt him more in the long run because, believing himself to be agile, he’ll sign up for physically strenuous jobs and may injure himself, or try to compete in sports and be let down hard, instead of gently, as a controlled reveal would allow.
Having read Feeling Good, I have a different view on emotions than those posed thus far in the comments.
Anger might be a valid response to the little goblin tying your shoes together, but the rational person asks, “Does it benefit me or hurt me to feel anger?” Anger is generally a maladaptive response in today’s environment of tremendous punishments for physical violence, and that’s beside the fact that it is an extremely unpleasant feeling.
Instrumental rationality, remember? If it prevents you from fulfilling your goals to feel x, then x is unwarranted.
In Eliezer’s case, “It benefits me to feel sad because my brother died,” is uncertain. Maybe it motivates him to work really hard at creating Friendly AI and is thus warranted, but the impression I get is that he was already doing that.
I almost hope he doesn’t see this comment, but I’d like to see his response. I have a vague feeling of something crucial I overlooked.
Edit: It seems Amanojack expressed this sentiment earlier, and I didn’t really need to post this. Oops.
It does benefit you to feel sad because your brother died, though not exactly directly. The reason you feel sad is that you were attached to him. You would not feel sad if he were a random, nameless (to you) stranger. Having that attachment is beneficial, even if the consequent emotion is not. But the two are inextricably tied together, and the prospect of sadness at the loss is part of what keeps you wanting to look after each other.
The question of rationality in emotions is better considered in the framework of Rational Emotive Behavior Therapy. An emotion is irrational if it results from an irrational belief: one that is dogmatic, rigid, inflexible. When you recognize this and replace the irrational belief with a rational one, the irrational emotion tends to be replaced by a rational one.
In the example of the goblin, the anger is not a direct result of the goblin tying your shoes together, but of your beliefs about the goblin tying your shoes together. Common anger-inciting beliefs are “he shouldn’t have done that” or “I can’t stand that he did that.” But why shouldn’t he have done that? Is there some law stating goblins can’t tie shoes together that was violated? Can you not stand it? Will you expire on the spot if it happens? No, what you really ought to realize is that “it’s unfortunate and inconvenient that the goblin tied my shoes together.” And when you think that thought, the anger typically turns into mild irritation or disappointment.
In the case of losing a brother, being sad and mourning is a normal, natural, and healthy response. If you went around thinking “I can’t live without him” or “I can’t stand that he died” you’re going to upset yourself irrationally and likely end up unduly depressed. If you replace those thoughts with “It’s very sad that my brother died, but I can tolerate it and life will still go on” you will likely be sad and mournful, but then move on with your life as most people do when they lose loved ones.
Emotions can result in conclusions that do not arise rationally. You don’t CHOOSE to be angry, and this anger can make your decision for you.
We are also very well acquainted with hindsight. We can look back on a situation that resolved itself in a way we would have avoided, if only we hadn’t been so emotional. I really feel that the emotionless state is the default.
Speak for yourself!
To a point, you do choose to let yourself be angry or not. The same thing that would make you angry in general won’t when you know you can’t afford to be angry (like in a job interview, or on a promising first date); you won’t let yourself be angry.
It’s not always easy, but you can train yourself to control your anger better, and everyone does have a limited ability to choose when to be angry and when not.
To some extent this is true. Strong emotions do have the power to shut down activity in the executive centers of the brain. There’s a physiological basis for the idea of “seeing red” when you’re angry. However, you can also train yourself to stop your emotional reactions in their tracks, think about them, and change them. You can choose not to be angry, but you likely need education and training to do so, and you may not be successful 100% of the time. But you can certainly improve dramatically.
Interesting post. I think something like that happened to me—I was only glad when I was right, or at least thought I was right, but… Doesn’t rationality in general diminish sadness over non-acute things? Sure, wars are awful no matter how rational or irrational you are, but… For example, dealing with the fact that The Universe Doesn’t Care seems very troublesome for a lot of my peers, to the point where they push it away, same with genetically-determined intelligence.
Same with, as I’ve noticed, a seeming lack of empathy towards people. Not sure how to deal with that, as I want to be right, and correct others, even when they don’t like it. Ah, the dilemmas… And I can’t think of a third alternative either.
“our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy”
Fallacy of the single cause. Knowledge of the physical fact of his being alive does not completely determine your response of being happy; many other things come into this, of which at least a few are non-rational. Maybe your brother is a convicted serial killer who recently escaped from detention, killed a few more people according to his old habits, and is now reported to be alive only by virtue of having escaped a police hunt through the nearby forest (with officers ordered to shoot on sight). Yet still you may be happy to hear he’s alive, and here comes the usual explanation people give, the real explanation: “because he’s your brother.” This is considered to be the most important factor in your being happy in this case—“because he’s your brother”—and it encodes some non-rational baggage together with some arguably rational things (like an evolutionary preference to support the survival of your kin’s genes, etc.).
The fact is that any human preference results from multiple causes, and at least one of those will always be non-rational (which is to say, I don’t know of even one example where this was not the case), opening said preference to being labelled “non-rational.”
Reason is just a tool. Before you decide what to use the tool for, you have to have non-rational preferences about which things to even try to do. For example, first you have the non-rational desire to predict the future behaviour of physical systems with high accuracy and only afterwards do you employ rational methods to achieve that (which leads to science). The only rational part is what you’re doing after you’ve established your fundamental goal. The fundamental goal itself can’t be rational insofar as it can’t be derived logically from any antecedents. Even if your desire to predict things was based on your desire to survive and even if the desire of the individual to survive could be justified on the basis of the evolutionary goal for the species to survive, you still end up at a point where you can no longer offer any justifications. Why should your species survive and not others? Maybe you think your species has the highest capacity of ensuring the survival of life-in-general in the universe for the longest time? But even then, why should life-in-general survive? Just because. Non-rationally. :)
Good catch; reason cannot determine our end goals. Eliezer covers that just a few essays down the road.
This was actually a personal statement, not a general hypothetical; his brother died three years before he wrote that essay, and wasn’t a serial killer. But Eliezer would agree that death is just plain bad; that’s a terminal value that doesn’t have—or need—rational justification.
I was talking to someone the other day about our treatment of sexual offenders. She seemed to be insinuating that I didn’t care about the plight of the victims because my proposed solutions were all aimed at reducing sexual violence rather than punishing the offenders.
I told her that the injustices visited upon the victims of sexual abuse made me very angry, which made me passionate about fixing the problem. Having set my goal of reducing sexual violence, it behooved me not to let my anger at the perpetrators distract me from the task of achieving that goal. If I’m ever presented with a choice where I can either punish the perpetrator or help the victim (or future potential victims), I choose the latter. You can’t always do both at the same time.
So I suppose emotions can be rational in that they can arise from truth, but they can also be very irrational in that they can prevent you from achieving your goals.
An interesting perspective on the validity of emotional states vis-à-vis Rationality.
I have something of a fear of heights. This fear is, I realize, irrational. Certainly, being afraid of falling and the resultant injury or death is reasonable and potentially useful. However, fear when it is completely unfounded…
I remember a spring break some years back where I learned to ski and enjoyed it very much indeed. I was, however, held back by my visceral reaction: whenever I approached a portion of the trail where I could not see my path of travel, part of my brain was absolutely convinced that I would find myself plunging into the yawning crevasse which awaited me. This was irrational, as I well knew that were the designers of ski resorts prone to leaving yawning crevasses lying about, they would find a definite limitation in their repeat business.
“Nothing doing,” said my brain, “there is the veritable Grand Canyon just beyond that hillock, and I am locking the legs in ‘Snow Plow’ position until I see different!”
A related quote, commonly attributed to Einstein: “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”
What about applying rationality to the emotional situations themselves? When your family member dies by virtue of someone else’s mistake or accident, does rationality require (in its purest sense) that we evaluate the situation without the emotions that a family member often feels? If not, what if a third party “rationally” evaluates the situation differently (e.g., “your family member was equally at fault”)? Can two different viewpoints about the same event be rational, taking into account each decision maker’s relative emotions (or lack thereof)?
Rationality doesn’t require that you not feel the emotions; it just requires that you avoid letting them bias you towards one conclusion over another. You should follow the evidence to determine the level of guilt of the perpetrator. There is no causal link from how you feel about the event to how it actually happened. I’d have to say that in terms of interpreting the event, there is no room to “agree to disagree” if all the facts are understood and agreed upon. Certainly there’s room to feel differently about it based on your own relative situation, but that has no bearing on the interpretation of the event.
The way I see it, emotions and reason serve two complementary functions. Emotions tell you your goal, what you want. Reason tells you how to get there. Your emotions may say “Go south!” and reason may say “There is an obstacle in my path. In order to reach my destination, I need to first make a detour.” If you allow your emotions to override your sense of reason, you’ll try to go south and plow straight into the wall, and that lack of reason will hinder your ability to achieve your desired ends. If you think that reason is the way and emotions are the enemy and thus undervalue your emotions, you’ll wander around aimlessly, as you’ll have no sense of where it is you actually want to go. If one were truly Spock-like, and bad events failed to result in negative affect, there would be no reason to think of them as bad and thus no logical reason to avoid them. (Here you could argue that, well, if it affects other people negatively, that would be reason. But in that case you’re assuming empathy—that when bad things happen to others, it makes you feel bad, which is an emotional response.)
OK.
So how would you describe those decisions that are made based on emotion? Are they irrational? Are they unreasonable? How would the fact that you cannot get the relevant evidence play into the analysis of my judgment, formed at least partially based on emotion? Is the rational point of view in such a case just “I don’t know”?
This is not meant to disagree with your point, but I want to push to see how far your analysis holds.
Well, that really depends on what the decision is and what the circumstantial factors are. As I said in my last comment, decisions are made by a combination of emotion and reason. Emotions tell you where you want to go, and reason tells you how to get there. Whether or not a decision is reasonable depends on (1) was it an effective (and efficient, though that’s somewhat less important) way of achieving your goal? Did it actually produce the outcome desired by your emotions? And (2) was it consistent with reality and the facts? Was the decision based on accurate information?
Taking the example you gave, of a family member being hurt by someone else in an accident, your emotions in reaction to this event are likely to be very charged. You just lost someone that was important to you, and you’re bound to feel hurt. It’s also very common to feel angry and to want revenge on (or justice for) the person that was responsible. It’s not clear to me why the human default is to assign guilt without evaluating the situation first to see whether or not the person actually is guilty, but that does seem to be the common response. In this case, it would be up to a jury to decide whether this constituted manslaughter. It’s most probable that the jury, having no vested interests besides ensuring justice, would be able to come to the most rational conclusion.
That said, if you are being truly rational about it, and if your emotions are telling you your goal is to find out who (if anyone) was responsible, then your conclusion should be no different from the jury’s. Of course, most people do allow their emotions to bias them, and aren’t rational (thus the need for the jury). But if you are being rational about it, and your goal truly is about discovering the guilt or innocence of the parties involved, then how you feel about the situation is what motivates your search, and reason and evidence should be what determine your answer. If you really don’t have enough evidence, and the evidence you do have doesn’t point more in one direction than the other, then yes, the rational conclusion would be simply to admit that you don’t know.
One should be careful to inspect what exactly that emotional motivation actually is: whether it’s to determine guilt or innocence and learn the truth about the situation, or to find someone to blame so that you can feel better about it. (Although how it would make you feel better to condemn a potentially innocent person, when it will do nothing to bring back your family member nor help anyone else, is a mystery to me. Alas, human beings have a lot of nonsensical intuitions.)
That said, if you’re honest about your intentions, and what you really want is to blame someone else, and not to find the truth, and the possibility of blaming someone innocent isn’t inconsistent with other explicit or implicit pro-social goals of yours, then to point the finger without basing your conclusion on the examination of the evidence isn’t strictly irrational, since it would be consistent with your goals, to which the facts aren’t relevant. However, that sort of approach would be pretty anti-social, and I doubt anyone having that goal would be honest enough to admit it. If your stated goal is to find the truth, then the only honest thing to do is look at the evidence, follow it, and be prepared that it might go either way.
It does no good to write in the bottom line before you start if your goal is to find out the truth. You won’t arrive at the truth that way, and if your emotions tell you the truth is what you want, then that behavior would be irrational. In the words of Eliezer Yudkowsky, “Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.” I strongly recommend you read Eliezer’s posts The Bottom Line as well as Rationalization, as they address the issue you seem to be struggling with.
A somewhat related, incredibly badass quote.
-Frederick Douglass, black Abolitionist leader, in What to the Slave is the Fourth of July?
(Related, Richard Rorty on a pragmatist/postmodernist approach to human rights, solidarity and empathy.)
(P.S. Considered posting this in Rationality Quotes first… but I hoped that the context of EY’s essay might help the quote look less provocative/trollish for LW.)
This is a fantastic set of quotes. I think it is necessary to attach a disclaimer, though. As he points out, there are definitely circumstances when the right and proper response is to ridicule hypocrisy and rain down scathing critique on those who uphold things they know to be unjust. However, such circumstances can be defined fairly narrowly, and don’t apply to most cases. This isn’t a catch-all license not to have to debate an argument, because that would require a similarly well-justified reason.
When there is a consensus that something is morally wrong and has a better alternative, but some who benefit from the practice put up a flimsy defensive argument that most people see straight through, then the thing to do is rouse the populace against what they already know to be unacceptable. But if the point is hotly debated, with many people on both sides of the thing itself (not just arguing that it is a necessary evil, but arguing against any “better” alternative), then one should be wary. After all, political debates should not appear one-sided. These types of political arguments seem to be the most common.
The thing to do in most circumstances is to find where the truth lies. Not just to pick a side, but to objectively examine all arguments, weigh the pros and the cons as they are found, and update beliefs to match reality. Only once the truth has been found with a high degree of certainty should things shift from a purely intellectual investigation into all-out advocacy. The academic approach should slowly transition from discussion into lobbying as evidence builds. At the far end of the spectrum, to be reached only once one has an extremely high degree of certainty in one’s arguments, is “a fiery stream of biting ridicule, blasting reproach, withering sarcasm, and stern rebuke”. I don’t think it’s possible to reach sufficient certainty in one’s own opinions without the peer review of an entire population for a while, perhaps an entire generation. If, after a generation of debate, a super-majority are sympathetic to the cause but are unwilling to fight the entrenched powers that be (or to give up their own comforts, or to make the massive changes needed to correct the problem, or whatever has prevented resolution so far), THEN it is time to abandon traditional discourse and the usual debate. That is what’s needed, under those circumstances, to actually motivate the populace to do what they already know to be the right thing.
Note that Frederick Douglass was speaking about issues which had been widely debated for almost a century, with one side claiming “necessary evil”, and the other showing by example that it wasn’t, in fact, necessary at all. If one’s own cause doesn’t compare, then perhaps more careful thought or more discussion is what is called for. That’s not to say that the issue appeared clear cut at the time (it definitely did not), but only that relatively small logical steps (in an objective sense) were needed to arrive at a high-certainty conclusion. Such quotes aren’t justification for anyone to do the same with their own pet issue, at least not without very careful consideration of whether it is really what is needed most.
You’re basically saying that we shouldn’t reject rational discussion unless our cause is really, really, proven. And everyone thinks their cause is really, really, proven. It doesn’t matter whether you phrase it as “high degree of certainty” or “showing by example that it wasn’t necessary” or “only has a flimsy defensive argument” or even “there is a moral consensus”; everyone’s pet cause falls into that category, as far as they are concerned.
Just like you need to give criminal suspects trials even if you think they are guilty, you need to treat ideas rationally even if you think their supporters don’t have a case.
If we had an infinite amount of time to spend treating all cases equally, then I would agree that all opinions should be argued out rather than ignored. Unfortunately, we only have limited time, and have to allocate it where we think it will do the greatest good. I think that a perfect rationalist would encounter ideas that aren’t worth the time to debate fully with their proponents. Unfortunately, we aren’t perfect rationalists, but thankfully we know that we aren’t perfect rationalists, and can try to compensate for our inadequacies. One such inadequacy is that we generally drastically overestimate how likely we are to be correct. In extreme cases, we even assign 100% certainty to things. The previous 2 sequences explain why this is a very bad idea. I think this is the sort of thing you were pointing out, and I would agree with you on that.
Even so, if I am extremely certain of something, and have good reason to believe that I’m not missing some subtle point, (such as the topic having been previously debated to death for the previous century), and if I apply a correction factor to compensate for the tendency I know I have to overestimate probabilities… if I do all this, and the probability of being correct still turns out to be quite high, with a narrow standard deviation, then I would indeed be inclined to waste little to no time on further discussion, and instead devote all my energies to fixing the problem by any means necessary.
Further, I suspect that you do the same to some degree. What issues do you spend more time arguing over than solving? (Maybe most political issues.) What issues do you spend more time solving than arguing over? (Perhaps you and your spouse spend more time actually doing housework than discussing the details of how to optimally divide labor.) What issues do you spend 99% of your efforts fixing, rather than discussing the best fix? Aren’t there some issues where you sometimes refuse to “feed the trolls”, thus rejecting an opportunity to debate a topic that you are extremely sure of? Or do you make a policy of always replying to all such bait?
I’m not saying it would be a great heuristic to follow, especially for most people. I’m saying that, in an extremely narrow scope, it holds true. If you take the limit as p(correct) goes to certainty (infinity if you are measuring in decibels (∞ dB) or odds (∞:1), 1 if you are using fractions (a 1/1 chance), and 100% if you are using percentages; I think decibels illustrate the point nicely), eventually you have to start acting quite similarly to how you would if you were 100% certain. That’s why Cromwell’s Rule exists: to protect us from rashly assigning such ludicrously high probabilities to anything. That overconfidence is the real problem.
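As a quick numerical sketch of that limit (my own illustration, not part of the original comment; it uses the standard log-odds convention, 10·log10(p/(1−p)) decibels), notice how each additional “nine” of probability costs roughly another ten decibels of evidence, and absolute certainty would cost infinitely many:

```python
import math

def prob_to_decibels(p):
    """Convert a probability to log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

# Each extra "nine" of probability demands about 10 dB more evidence;
# p = 1.0 (absolute certainty) would require infinitely many decibels,
# which is the content of Cromwell's Rule.
for p in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"p = {p:<8} -> {prob_to_decibels(p):6.1f} dB")
```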
Yes, it does. I would, for instance, put creationism in that category.
But I suspect the advice would be bad for most people, at least most people of the kind you see in Internet arguments, because people have a habit of saying that all sorts of things are really well established to all reasonable people and are only opposed by the deluded and by those with a stake in the problem. I wouldn’t, for instance, put capital punishment, or vegetarianism, or effective altruism, or immigration, in that category, but I’ve seen people treat all of those that way.
What is it with you and shoelaces?
Emotions, like any sensory input, can serve as a source of information to be rationally inspected and used to form beliefs about the external world. It is only when emotions interfere with the process of interpreting information that they become detrimental to rationality.
I don’t think this covers the role of emotions in regard to instrumental rationality. The most negative consequence of emotions is how they control our behavior despite our knowing the rational thing to do based on rational beliefs. I think there is far more merit to the idea of being able to completely control your emotions than you give it credit for. Using emotions as a driving factor for motivation is counterproductive in many situations, because you are not in control of how, and how much, they affect you.
I think you’ve wrongly interpreted being in control of your emotions to being emotionless.
“Since the days of Socrates at least, and probably long before, the way to appear cultured and sophisticated has been to never let anyone see you care strongly about anything.”
I would strongly encourage anyone who wants a good counterexample to read Plato’s Symposium, where the desire for wisdom is specifically linked with erotic desire.
I’m sleep deprived right now, things are getting ‘weirder’ lol. I ‘should’ do this later… I seldom write in first person, but it seems easier right now. So I don’t think these comments are exactly typical of mine. I hope there is some substance.
I try to keep two mental compartments, one where I do rational processing, the other where emotions occur. One is a noisy mess. From the other, rational thoughts flow. I struggle to ‘rational-check’ them, and form them into coherent sentences. I organize them later into subtopics, since they jump between those uncontrollably. Inside the rational box there might be a sub-box I call the ‘executive’ area, which tries to make rational decisions about actions, instead of just streaming rational thoughts. There is also an executive area in the emotional box. But now, after experience, I try to manage that box with the rational executive box. Actions need ‘higher clearance’. So I don’t consider myself especially ‘spontaneous’ lol.
Of emotions, I deem them all irrational in the most ‘pure’ sense of the term rational. That’s an important basic distinction for rational thought. ‘Thinking emotionally’, if you know what I mean, can have negative consequences. We have different ways of describing emotions as rational or not, so I clarify mine. Emotions can have pragmatic utility, such as fear, but that isn’t the same as my definition of rational. I know different people can have different, or even opposite, emotions when placed in exactly the same situation. That also depends on the type of situation, though. That inconsistency is one of the other factors I consider justification for classifying emotions as irrational.
There’s a human context in which I deem caring (that kind of love) about others to be universally rational, and not caring universally irrational. It’s deemed rational by pure rational thought itself. I mean that in the extended, justified, critical-thinking sense of pure rational thought. Part of the justification is that the emotion can potentially enhance a motivation to contribute to the primary purpose of life. That purpose, derived rationally, is to be healthy, as an individual and as a group. The term healthy has many implications, including being moral itself. So caring is an exception among emotions; it has a clear, consistent rational justification. The context is one I might call ‘objective morality’, a much longer discussion. So it’s by rational choice I endeavor to ‘actively’ care, and empathize, more than I might ‘passively’. Likewise, self-caring is deemed rational and moral.
I can’t exactly choose to quickly change an emotion to something other than what I’m feeling at a given moment, but I can sometimes quickly ‘disempower’ it, after my rational side reacts and judges ‘appropriateness’. Over time, maybe this trains me to feel a bit more ‘appropriately’ to begin with. This is more true of transient situations than of my general emotional view of my overall life situation. But I work on that too, the same way.
I think of this all as ‘rationally managing emotions’. I think it also helps me have more ‘self awareness’ psychologically.
The “something terrible happens” link is broken. It was moved to http://yudkowsky.net/other/yehuda/
Also fixed!
How does one go about this?
I have begun reading everything I can find by you on this page. I will probably also read other things, but it seems a foundation by (one of) the founders would be useful.
Still, while I see the ideas presented as very useful, I find myself wondering how to actually go about implementing them. Take any one thing as an example here, such as “Making Beliefs Pay Rent”. (I hope you are not annoyed by this Outside The Box-Box^^)
One way of doing this seems to be to simply read or think about this over and over until I have the thought ingrained into my mind’s commonly used pathways, so as to give myself more opportunities to actually work on my beliefs and implement these ideas into my day.
This seems inefficient, even though I don’t know whether starting out with something like that is simply bound to be inefficient.
Another way would probably be to sit down somewhere and try to let your beliefs flow through you while watching for inconsistencies.
This, however, appears to me unlikely to actually work; in my experience I start to drift and (probably) miss most things.
So, how does one study the skills of rationality and train oneself (not to deny facts/X/Y/..)?
If I missed something obvious and this annoys you, I hope I get an answer before you delete this, and I want you to know that I would feel sad about having annoyed you without offering you my submission.
I guess it all depends on what you want to achieve. Are you unhappy with your life? Do you want to change something specific? What keeps you from changing it?
I guess all these questions can be answered rationally by searching your feelings and looking for facts. My two cents. (I’m also trying to understand all this. :)
That invites a rather optimistic view of mind. If we had a mind deprived of emotions but similar to us in other respects, we would expect it to fare better than ours on average. Not because emotion is somehow _underlyingly_ irrational, but because it tends to intensify our biases (and to be the main motivation for some of them; affective death spirals come to mind first).
You could respond that curiosity and having something to protect are both based on emotions, but that’s a human motivation for rationality, not a guarantee of its efficiency; and both, unless supported by a good model, can also be fulfilled by religion. Truth as an instrument could be sufficient for an emotionless brain as well.
Reminded me of this blog post by Nicky Case, where they said “Trust, but verify”. Emotions are often a good heuristic for truth: if we didn’t feel pain, that would be bad.
I think this article wonderfully illustrates the primary relationship between emotions and epistemic rationality. Namely, that emotions can be downstream of false beliefs. Robin Hanson added in another comment that this relationship can go the other direction, when strong emotions bias us in ways that make us less epistemically rational.
But I think there is also a separate relationship between emotions and instrumental rationality. Namely, that emotions can influence which decisions you make. This includes but is not limited to epistemic bias.
Buddhists, lobotomy patients, and phlegmatic people all have things in their closets; they all have things to get angry, upset, or confused about. If you are a Buddhist, a lobotomy patient, or a phlegmatic person, you still see a particular narrative worldview. What you see does not change, because after all there will always be something to get tied up on. It is about shaping the things that you do get tied up on, and further, controlling your reaction to them.
The goblin is the writer
The closet is your map and territory of your mind
The shoelaces tie together connections in your mind
The shoes are the words
If you read an article here that makes you angry it is okay. The writer has simply made a connection in your mind that you believe imposes upon you a negative feeling. You feel angry because of the map and territory of your mind, not what is written.
Perception of the writer as the goblin disappears when you can change the shape of the map or territory.
Who is a goblin to steal your gold, but a leprechaun at the end of the rainbow.
I noticed that something is conspicuously missing from this article. Namely, that truth can have disutility as well as utility. There are instances where it is better not to know than to know. For instance, if Nazis come to your house looking for Anne Frank, it’s better that they don’t know she is in your attic. It can also be better that someone doesn’t know you don’t like their gift.
Then there are times when the truth can be a hindrance. For example, when I look at the desktop on my computer screen and drag a file to the trash, I am not throwing anything away. I am really manipulating voltages inside the computer, but knowing exactly how I am manipulating those voltages would distract so greatly from the actual task that if I insisted on knowing it before executing the task, it would take forever to get done. I don’t need to know all that to delete a file on my computer, because pretending that I am dragging a file to the trash on my screen works better than trying to manually manipulate the voltages on the reductionist level, so that information has disutility to me. I can think of plenty of other scenarios as well, such as a wife who cheated once but otherwise remained faithful. The truth could ruin an otherwise happy marriage forever.
What about beneficial mind hacks, where one fools oneself internally to accomplish some feat that one might otherwise be incapable of accomplishing, such as thinking “I can do anything if I really put my mind to it”? Knowing something can even get you killed, such as if you learn that the mafia is bribing the mayor or chief of police in your city. Assuming that truth always has utility just isn’t accurate. In most situations truth is useful, but that is a heuristic, not a universal. Knowing when to use it, and when to be able to let go of it, is also important. I like to view truth as a useful tool. No matter how useful it may be, it still isn’t the right tool for every single job. This itself is a truth.