Hello, I’m a 21-year-old undergraduate student studying Economics, with a bit of math on the side. I found LessWrong through HPMOR and recently started working through the sequences. I’ve always been torn between an interest in pure rational thinking, and an almost purely emotional / empathetic desire for altruism, and this conflict is becoming more and more significant as I weigh my options coming out of undergrad (Peace Corps? Development Economics?)… I’m fond of ellipses, science fiction novels, and board games, though I’ll keep my interests to a minimum here. I’ve noticed there are regular meetups; I’m currently studying abroad in Europe, but I live close to Washington DC and would enjoy meeting members of the community face to face at some point in the future!
Edit: If anyone reads this, could you either direct me to a conversation that addresses the question “How has LW / rational thinking influenced your day-to-day life, if at all?”, or respond to me directly here (or via PM) if you’re comfortable with that? Thanks!
I’ve always been torn between an interest in pure rational thinking, and an almost purely emotional / empathetic desire for altruism, and this conflict is becoming more and more significant
Those are not at all at odds. Read e.g. Why Spock is Not Rational, or Feeling Rational.
Relevant excerpts from both:
A popular belief about “rationality” is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.
So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: “That which can be destroyed by the truth should be.” My dreaming self’s happiness was opposed by truth. My sadness on waking is rational; there is no truth which destroys it.
and
To be sure, emotions often ruin our attempts at rational thought and decision-making. When we’re anxious, we overestimate risks. When we feel vulnerable, we’re more likely to believe superstitions and conspiracy theories. But that doesn’t mean a rational person should try to destroy all their emotions. Emotions are what create many of our goals, and they can sometimes help us to achieve our goals, too. If you want to go for a run and burn some fat, and you know that listening to high-energy music puts you in an excited emotional state that makes you more likely to go for a run, then the rational thing to do is put on some high-energy music.
Your purely emotional / empathetic desire for altruism governs setting your goals; your pure rational thinking governs how you go about reaching your goals. You’re allowed to be emotionally suckered, eh, influenced into doing your best (instrumental rationality) to do good in the world (for your values of ‘good’)!
Thank you for the reading suggestions! Perhaps my mind has already packaged Spock / lack of emotion into my understanding of the concept of ‘Rationality.’
To respond directly -
Your purely emotional / empathetic desire for altruism governs setting your goals; your pure rational thinking governs how you go about reaching your goals.
Though if pure emotion / altruism sets my goals, the possibility of irrational / insignificant goals remains, no? If, for example, I only follow pure emotion’s path to… say… becoming an advocate for a community through politics, there is no ‘check’ on the rationality of pursuing a political career as the means to achieve the most good (which, again, is a goal that requires rational analysis)?
In HPMoR, characters are accused of being ‘ambitious with no ambition’ - setting my goals with an empathetic desire for altruism would seem to put me in this camp.
Perhaps my goal, as I work my way through the sequences and the site, is to approach rationality as a tool / learning process of its own, and see how I can apply it to my life as I go. Halfway through typing this response, I found this quote from the Twelve Virtues of Rationality:
How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception… Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.
There is no “correct” way whatsoever of setting your terminal values, your “ultimate goals” (other agents may prefer you to pursue values similar to their own, whatever those may be). Your ultimate goals can include anything from “maximize the number of paperclips” to “paint everything blue” to “always keep yourself in a state of being nourished (for its own sake!)” or “always keep yourself in a state of emotional fulfillment through short-term altruistic deeds”.
Based on those ultimate goals, you define other, derivative goals, such as “I want to buy blue paint” as an intermediate goal towards “paint everything blue”. Those “stepping stones” can be irrational / insignificant (in relation to pursuing your terminal values), i.e. you can be “wrong” about them. Maybe you shouldn’t buy blue paint, but rather produce it yourself, or invest in nanotechnology to paint everything blue using nanomagic.
Only you can try to elucidate what your ultimate goals are (or can’t; humans are notoriously bad at accurately stating their actual utility functions), but having decided on them, they are supra-rational / beyond rational / ‘rationality not applicable’ by definition.
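If it helps to see the distinction concretely, here is a toy sketch in Python (every name and number below is invented for illustration, not anyone’s actual values): the terminal value is simply declared, and rationality only enters when scoring the candidate stepping stones by how well they serve it.

```python
# Toy model of terminal vs. instrumental goals. All values are made up.

# Terminal value: chosen by fiat, not by rationality. Here: "more of the
# world painted blue is better, full stop."
def terminal_utility(fraction_painted_blue):
    return fraction_painted_blue

# Hypothetical instrumental options ("stepping stones"):
# each maps to (probability of success, fraction painted if it succeeds).
options = {
    "buy blue paint":          (0.9, 0.001),
    "manufacture paint":       (0.6, 0.010),
    "blue-painting nanomagic": (0.1, 1.000),
}

# Instrumental rationality: score each option by expected terminal utility.
def expected_utility(option):
    p_success, fraction = options[option]
    return p_success * terminal_utility(fraction)

best = max(options, key=expected_utility)
print(best, expected_utility(best))  # under these numbers: nanomagic, 0.1
```

Under these made-up numbers the long-shot nanomagic option wins; change the probabilities and the “right” stepping stone changes with them, while “more blue is better” itself was never up for rational debate.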
There is no fault in choosing “I want to live a life that maximizes fuzzy feelings through charitable acts” over “I’m dedicating my life to decreasing the Gini index, whatever the personal cost to myself.”