I doubt it. In my experience, the average person is quite stupid.
Okay, yeah, I should have added the word “some”. Kaczynski is the only psychopath I’ve really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths, even though we probably never hear about 99% of them. That would have to be some kind of bias; out of curiosity, how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?
You may need a lot less money to retire than you’d think.
Believe me, I know. Even without trying to save money, I actually end up spending less on myself (excluding having paid for college) than on charity. Free hobbies are great. I didn’t mean a pension was a reason to become a detective; it would just be a nice perk. Thanks for the link, though. Lots of good articles on that site!
Most people use the term intelligence to refer to things like aptitude, working memory size and ability to remember things. I think that those things are overrated and that the ability to break things down like a reductionist is underrated.
Well, I’m biased in favor of this idea, since I have an awful memory, but a pretty good ability (sometimes too good for my own good) to break things down like a reductionist and dissolve topics. I’ll check out your post tomorrow and try to give some feedback.
even though I think I’m an amazing writer :)
I think so too!
I actually don’t even think there’s that much to say.
Nope, there’s really not, but another thing I’ve realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help convey an idea. If you skillfully parse through an idea, the audience will probably understand it at the time. But if you want the idea to actually sink in and stick with them, great examples are key. This is one reason I like Scott’s posts so much; they actually affect my life. Personally, I was borderline cocky when I was younger (but followed social norms and concealed it). Then, I got older and started to read more and more, moved to the Bay Area, and met loads of smart people. Because of this, my self-esteem began to plummet, but I read that article just in time to stabilize it at a healthy, realistic level.
Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?
people fall victim to scope insensitivity
Yeah, this is sooo real. On a logical level, it’s easy to recognize my scope insensitivity. On a “feeling” level, I still don’t feel like I have to go out and do something about it. But I don’t want to admit my preference ratios are that far out of whack; I don’t want to be that selfish. Ugh. Now I feel like I should do something ambitious again, I’m so waffley about this. Thanks for all the help thinking through everything. This is BY FAR the best guidance anyone has ever given me in my life.
I’m confused. If you assume that dying is bad, you have a lot to lose (proportional to the badness of dying). Are you considering death to be a neutral event?
No… sorry, I was just working through my first thoughts about the idea, not making a meaningful point. Continuing on the selfishness idea, all I meant was that the researchers themselves would surely die eventually without AI, so even if AI made the world end a few years earlier for them, they personally have nothing to lose relative to what they could gain (dying a few years earlier vs. living forever). My first thought was “that’s selfish, in a bad way, since they care less than the bajillions of still unborn people would about whether humans go extinct” but then I extrapolated the idea that the researcher would die without AI to the idea that humanity would eventually go extinct without AI and decided it was selfish in a good way.
Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I’ll go back and find exactly where it was.
Kaczynski is the only psychopath I’ve really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths
I don’t know too much about him other than the basics (“he argued that his bombings were extreme but necessary to attract attention to the erosion of human freedom necessitated by modern technologies requiring large-scale organization”).
I think that his concerns are valid, but I don’t see how the bombings help him achieve the goal of bumping humanity off that path. Perhaps he knew he’d get caught and his manifesto would get attention, but a) there’s still a better way to achieve his goals, and b) he should have realized that people have a strong bias against serial killers.
The reason I think his concerns are valid is because capitalism tries to optimize for wanting, which is sometimes quite different from liking. And anecdotally, this seems to be a big problem.
That would have to be some kind of bias; out of curiosity, how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?
I’m not sure what the bias is called :/. I know it exists and there’s a formal name though. I know because I remember someone calling me out on it LWSH :)
Nope, there’s really not, but another thing I’ve realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help convey an idea.
Yes, I very much agree. At times I think the articles on LW fail to do this. Humans need to have their System 1′s massaged in order to understand things intuitively.
Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?
Idk. This seems to be a question involving terminal goals. Ie. if you’re asking whether our innate conscientiousness makes us “good” or “bad”.
When I think of morality this is the/one question I think of: “What are the rules we’d ask people to follow in order to promote the happiest society possible?”. I’m sure you could nitpick at that, but it should be sufficient for this conversation. Example: the law against killing is good because if we didn’t have it, society would be worse off. Similarly, there are norms of certain preference ratios that lead to society being better off.
I don’t think we’d be better off if the norm was to have, say, equal preference ratios for everyone in the world. Doing so is very unnatural and would be very difficult, if not impossible. You have to weigh the costs of going against our impulses against the benefits that marginal conscientiousness would bring.
I’m not sure where the “equilibrium” points are. Honestly, I think I’d be lying to myself if I said that a preference ratio of 1,000,000,000:1 for you over another human would be overall beneficial to society. I suspect that subsequent generations will realize this and look at us in a similar way to how we look at Nazis (maybe not that bad, but still pretty bad). Morality seems to “evolve” from generation to generation.
Personally, my preference ratios are pretty bad. Not as bad as the average person because I’m less scope insensitive, but still bad. Ex. I eat out once in a while. You might say “oh well that’s reasonable”. But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.
But I continue to eat out once in a while, and honestly, I don’t feel (that) bad about it. Because I accept that my preference ratios are where they are (pretty much), and I think it makes sense for me to pursue the goal of achieving my preferences. To be less precise and more blunt, “I accept that I’m selfish”.
And so to answer your question:
Can we also go easy on ourselves relative to innate conscientiousness?
I think that the answer is yes. Main reason: because it’s unreasonable to expect that you change your ratios much.
Yeah, this is sooo real. On a logical level, it’s easy to recognize my scope insensitivity. On a “feeling” level, I still don’t feel like I have to go out and do something about it.
It’s great that you understand it on a logical level. No one has made much progress on the feeling level. As long as you’re aware of the bias and make an effort to massage your “feeling level” towards being more accurate, you should be fine.
But I don’t want to admit my preference ratios are that far out of whack; I don’t want to be that selfish.
Why?
I think that exploring and answering that question will be helpful.
Try thinking about it in two ways:
1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.
2) An emotional analysis of what you feel, why you feel it, and, in the event that your feelings aren’t accurate, how you can nudge them to be more accurate.
This is BY FAR the best guidance anyone has ever given me in my life.
Wow! Thanks for letting me know. I’m really happy to help. I’ve been really impressed with your ability to pursue things, even when it’s uncomfortable. It’s a really important ability and most people don’t have it.
I think that not having that ability is often a bottleneck that prevents progress. Ex. an average person with that ability can probably make much more progress than a high IQ person without it (in some ways). It’s nice to have a conversation that actually progresses along nicely.
Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I’ll go back and find exactly where it was.
I think I have. I remember it being one of the few instances where it seemed to me that Eliezer was misguided. Although:
1) I remember going through it quickly and not giving it nearly as much thought as I would like. I’m content enough with my current understanding, and busy enough with other stuff that I chose to put it off until later. Although I do notice confusion—I very well may just be procrastinating.
2) I have tremendous respect for Eliezer. And so I definitely take note of his conclusions. The following thoughts are a bit dark and I hesitate to mention them… but:
a) Consider the possibility that he does actually agree with me, but he thinks that what he wrote will have a more positive impact on humanity (by influencing readers).
b) In the case that he really does believe what he writes, consider that it may not be best to convince him otherwise. Ie. he seems to be a very influential person in the field of FAI, and it’s very much in humanity’s interest for that person to be unselfish.
I haven’t thought this through enough to make these points public, so please take note of that. Also, if you wouldn’t mind summarizing/linking to where and why he disagrees with me, I’d very much appreciate it.
They both laughed, then Harry turned serious again. “The Sorting Hat did seem to think I was going to end up as a Dark Lord unless I went to Hufflepuff,” Harry said. “But I don’t want to be one.”
“Mr. Potter...” said Professor Quirrell. “Don’t take this the wrong way. I promise you will not be graded on the answer. I only want to know your own, honest reply. Why not?”
Harry had that helpless feeling again. Thou shalt not become a Dark Lord was such an obvious theorem in his moral system that it was hard to describe the actual proof steps. “Um, people would get hurt?”
“Surely you’ve wanted to hurt people,” said Professor Quirrell. “You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt.”
Sorry, I feel like I’m linking to too many things which probably feels overwhelming. Don’t feel like you have to read anything. Just thought I’d give you the option.
b) he should have realized that people have a strong bias against serial killers.
Yeah, this was irrational. He should have remembered his terminal value of creating change instead of focusing on his instrumental value of getting as many people as possible to read his manifesto. -gives self a little pat on back for using new terminology-
The reason I think his concerns are valid is because capitalism tries to optimize for wanting
Could you please elaborate on this idea a little? Anyway, thanks for the link (don’t apologize for linking so much, I love the links and read through and try to digest about 80% of them...). The liking/wanting difference is intuitive, but actually putting it into words is really helpful. I’m interested in exactly how you tie it in with Kaczynski, and I also think it’s relevant to my current dilemma.
Anyway, Scott’s example about smoking makes it seem as if people want to smoke but don’t like it. I think it’s the opposite; they like smoking, but don’t want to smoke. Do I really have these two words backwards? We need definitions. I think “liking” has more to do with your preferences, while “wanting” has to do with your goals. I recognize in myself that if I like something, it’s very hard for me not to want it, and personally I find matrix-type philosophy questions to actually be difficult. That’s why I’ve never tried smoking; I was scared I might like it and start to want it. Without having tried it, it’s easy to say that it’s not what I want for myself. Is this only because I think it would bring me less happiness in the long run? I don’t think so. Even if you told me with certainty that smoking (or drugs) feels so incredibly good and is so incredibly fun that it could bring me happiness that outweighs the unhappiness caused by the bad stuff, I still wouldn’t want it! And I have no idea why. Which makes me wonder… what if I had never experienced how wonderful a fun-filled mostly-hedonic lifestyle is? Would I truly want it? Or am I just addicted?
You might say “oh well that’s reasonable”. But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.
Funny that you mention this example; I wouldn’t say it’s reasonable. Let me share a little story. When I was way younger, maybe 10 years ago, I went through a brief phase where I tried to convince my friends and family that eating at restaurants was wrong, saying “What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks… you would feel guilty about eating at the restaurant instead of helping, right? (“yes”) This is your conscience, right? (“yes”) Your conscience is from God, right? (“yes”) People in Africa are just as important as people in the US, right? (“yes”) Therefore, isn’t it wrong to eat at a restaurant instead of donating the money to help starving kids in Africa? (“no”) Why? (“it just isn’t!”)”… at which point they would insist that if I truly believed this was wrong, I should act accordingly, and I just told them “No, I can’t, I’m too selfish… and besides, saving eternal souls is more important than feeding starving children.”

Then I looked at all the smart, unselfish adults I knew who still ate at restaurants, told myself I must be wrong somehow, and avoided thinking about the issue until we read Singer’s Famine, Affluence, and Morality in college (In my final semester, this was the class where it first occurred to me that there was nothing wrong with putting effort into school beyond what was necessary for perfect grades). I was really excited when we read it and was eagerly anticipating discussing it the next class to finally hear if someone could give a solid refutation of my old idea. My professor cancelled class that day, and we never went back to the topic. I cared, but unfortunately not quite enough to go talk to my professor outside of class. That was for nerds. So I went on believing it was “wrong” to eat in restaurants, but to protect my sanity, didn’t think about it or do anything about it, even after de-converting from Christianity… until I came across Scott’s post Nobody Is Perfect, Everything Is Commensurable, which seems incredibly obvious in hindsight, yet was exactly what I needed to hear at the time.
I don’t think we’d be better off if the norm was to have, say, equal preference ratios for everyone in the world.
I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm. Whether this is possible is another question entirely, but I keep trying to rid myself of the habit of thinking natural = better (personally, I see this habit as another effect of Christianity; I’m continually amazed to find just how much of my worldview it shaped).
I think that exploring and answering [Why don’t I want selfish preference ratios?] will be helpful.
I want to answer this question with “because emotion!” Is this allowed? Or is it akin to “explaining” something by calling it an emergent phenomenon?
1) Rationally, I can’t trace this back any farther than calling it a feeling. Was I born with this feeling? Is it the result of society? I don’t know. I don’t honestly think unselfish preference ratios would lead to a personal increase in my overall happiness, that’s for sure. Take effective altruism, for example. When I donate money, I don’t feel warm and fuzzy. I get a very small amount of personal satisfaction, societal respect, and a tiny reduction in the (already very small) guilt I feel for having such a good life. But honestly I rarely think about it, and I’m 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks’ vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don’t know why. So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.
2) Analyze emotion?? Can you do that?! As an ISTP, just identifying emotion is difficult enough.
As for your points about Eliezer...
a) Yeah, I have considered this too. But I think most of his audience is rational enough that if he said something that wasn’t rational, his credibility could take a hit. Whether this would stop him and how much of a consequentialist he really is, I have no idea.
b) Yeah, this is an interesting microcosm of the issue of whether we want to believe what is true vs. what is best for society. That said, I’m not saying Eliezer is wrong. My intuition does take his side now, but I usually don’t trust my intuitions very much.
Anyway, I went back through the book and found the title of the post. It’s Terminal Values and Instrumental Values. You can jump to “Consider the philosopher.”
Harry had that helpless feeling again. Thou shalt not become a Dark Lord was such an obvious theorem in his moral system that it was hard to describe the actual proof steps. “Um, people would get hurt?”
“Surely you’ve wanted to hurt people,” said Professor Quirrell. “You wanted to hurt those bullies today. Being a Dark Lord means that people you want to hurt get hurt.”
Good quote! Right now, I interpret this as showing how personal happiness and “altruism/not becoming a Dark Lord” are both inexplicable, perhaps sometimes competing terminal values… how do you interpret it?
Could you please elaborate on this idea a little? … I’m interested in exactly how you tie it in with Kaczynski, and I also think it’s relevant to my current dilemma.
Sure!
In brief: Kaczynski seems to have realized that economies are driven by wanting, not liking, and that this will lead to unhappiness. I think that that conclusion is too strong though—I’d just say that it’ll lead to inefficiency.
Longer explanation: ok, so the economy is pretty much driven by what people choose to buy, and where people choose to work. People aren’t always so good at making these choices. One reason is because they don’t actually know what will make them happy.
Example: job satisfaction is important. There are lots of subtle things that influence job satisfaction. For example, there’s something about things like farming that produces satisfaction and contentment. People don’t value these things enough → these jobs disappear → people miss out on the opportunity to be satisfied and content.
Another reason why people aren’t good at making choices is because they don’t always have the willpower to do what they know they should.
Example: if people were smart, McDonalds wouldn’t be the huge empire that it is. People choose to eat at McDonalds because they don’t weigh the consequences it has on their future selves enough. The reason why McDonalds is huge is because tons of people make these mistakes. If people were smart, MealSquares and McDonalds would be flip-flopped.
Kaczynski seems to focus more on the first example, but I think they’re both important. Economies are driven by the decisions we make. Given the predictable mistakes people make, society will suffer in predictable ways. Kaczynski seems to have realized this.
I avoided using the terms “wanting” and “liking” on purpose. I’ll just say quickly that words are just symbols that refer to things, and as long as the two people are using the same symbol-thing mappings, it doesn’t matter. What’s important is that you seem to understand the distinction between the two things as far as wanting/liking goes. I do see what you mean about the term “wanting”, and now that I think about it I agree with you.
(I’ve avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)
Edit: I’m about 95% sure that there’s actual neuroscience research behind the wanting vs. liking thing. Ie. they’ve found a distinct brain area that corresponds to wanting, and a different distinct brain area that corresponds to liking.
Note: I studied neuroscience in college. I did research in a lab where we studied vision in monkeys, and part of this involved stimulating the monkey’s brain. There was a point where we were able to get the monkey to basically make any eye movement we wanted (based on where and how much we stimulated). It didn’t provide me with any new information as far as free will goes, but literally seeing it in person with my own eyes influenced me on an emotional level.
That’s why I’ve never tried smoking; I was scared I might like it and start to want it.
Interesting, I’ve never smoked, drank or done any drugs at all for similar reasons. Well, that’s part of the story.
Would I truly want it? Or am I just addicted?
I’m going to guess that the reason why you wouldn’t want to do drugs even if you knew they’d make you happy is because a) it’d sort of numb you away from thinking critically and making decisions, and b) you wouldn’t get to do good for the world. Your current lifestyle doesn’t seem to be preventing you from doing either of those.
“What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks… you would feel guilty about eating at the restaurant instead of helping, right?
:) I’ve proposed the same thought experiment except with buying diamonds. Eg. “Imagine that you go to the diamond store to buy a diamond, and there were x thousand starving kids in the parking lot who you could save if you spent the money on them instead. Would you still buy the diamond?”
And in the case of diamonds, it’s not only a) the opportunity cost of doing good with the money—it’s that b) you’re supporting an inhumane organization and c) you’re falling victim to a ridiculous marketing scheme that gets you to pay tens of thousands of dollars for a shiny rock. The post Diamonds are Bullshit on Priceonomics is great.
Furthermore, people do a, b and c in the name of love. To me, that seems about as anti-love as it gets. Sorry, this is a pet peeve of mine. It’s amazing how far you can push a human away from what’s sensible. If I had an online dating profile, I think it’d be, “If you still think you’d want a diamond after reading this, then I hate you. If not, let’s talk.”
I know I haven’t acknowledged the main counterargument, which is that the sacrifice is a demonstration of commitment, but there are ways of doing that without doing a, b and c.
Why? (“it just isn’t!”)
That sort of thinking baffles me as well. I’ve tried to explain to my parents what a cost-benefit analysis is… and they just don’t get it. This post has been of moderate help to me because I understood what virtue ethics is after reading it (I never understood it before reading it).
People who say “it just isn’t” don’t think in terms of cost-benefit analyses. They just have ideas about what is and isn’t virtuous. As people like us have figured out, if you follow these virtues blindly, you’ll run into ridiculousness and/or inconsistency.
However, this isn’t to say that virtue-driven thinking doesn’t have its uses. Like all heuristics, they trade accuracy for speed, which sometimes is a worthy trade-off.
I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm.
I’m glad to hear you disagree :) But I sense that I may not have explained what I think and why I think it. If you could just flip a switch and make everyone have equal preference ratios, I think that’d probably be a good thing.
What I’m trying to say is that there is no switch, and that making our preference ratios more equal would be very difficult. Ex. try to make yourself care about a random accountant in China as much as you do about, say, your aunt. As far as cost-benefit analysis goes, the effort and unease of doing this would be a cost. I sense that the costs aren’t always worth the benefits, and that given this, it’s socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?
Good quote! Right now, I interpret this as showing how personal happiness and “altruism/not becoming a Dark Lord” are both inexplicable, perhaps sometimes competing terminal values… how do you interpret it?
I interpret it as “Harry seems to think there are good reasons for choosing certain terminal values. Terminal values seem arbitrary to me.”
(I’ve avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)
Nope, your longer explanation was perfect, and now I understand, thanks. I’m just a little curious why you would say those things lead to inefficiency instead of unhappiness, but you don’t have to elaborate any more here unless you feel like it.
Well, that’s part of the story.
Again, now I’m slightly curious about the rest of it...
I’m going to guess that the reason why you wouldn’t want to do drugs even if you knew they’d make you happy is because a) it’d sort of numb you away from thinking critically and making decisions, and b) you wouldn’t get to do good for the world. Your current lifestyle doesn’t seem to be preventing you from doing either of those.
Good guess. You’re right. But (I initially thought) smoking would hardly prevent those things, and I still don’t want to smoke. Then again, addiction could interfere with a), and the opportunity cost of buying cigarettes could interfere with b).
I’ve proposed the same thought experiment except with buying diamonds.
No way! A while back, I facebook-shared a very similar link about the ridiculousness of the diamond marketing scheme and proposed various alternatives to spending money on a diamond ring. I wasn’t even aware that the organization was inhumane… yikes, information like that should be common knowledge. Also, probably at least some people don’t really want to get a diamond ring… but by the time the relationship gets serious, they can’t get themselves to bring it up (girls don’t want to be presumptuous, guys don’t want to risk a conflict?) so yeah, definitely a good kind of thing to get out of the way in a dating profile, haha.
This post has been of moderate help to me because I understood what virtue ethics is after reading it.
Wow, that’s so interesting, I’d never heard of virtue ethics before. I have many thoughts/questions about this, but let’s save that conversation for another day so my brain doesn’t suffer an overuse injury. My inner virtue-ethicist wants to become a more thoughtful person, but I know myself well enough to know that if I dive into all this stuff head first, it will just end up being “a weird thinking phase I went through once”, and instrumentally, I want to be thoughtful because of my terminal value of caring about the world.
(My gut reaction: Virtues are really just instrumental values that make life convenient for people whose terminal values are unclear/intimidating. (Like how the author of the link chose loyalty as a virtue. I bet we could find a situation in which she would abandon that loyalty.) But I also think that there’s a place for cost-benefit analysis even within virtue ethics, and that virtue ethicists with thoughtfully-chosen virtues can be more efficient consequentialists, which probably doesn’t make much sense, but I’d like to be both, please!)
If you could just flip a switch and make everyone have equal preference ratios, I think that’d probably be a good thing...it’s socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?
Oh, yeah, that makes sense to me. Kind of like capitalism, it seems to work better in practice if we just acknowledge human nature. But gradually, as a society, we can shift the preference ratios a bit, and I think we maybe are. :) We can point to a decrease in imperialism, the budding effective altruism movement, or even veganism’s growing popularity as examples of this shifting preference ratio.
Nope, your longer explanation was perfect, and now I understand, thanks. I’m just a little curious why you would say those things lead to inefficiency instead of unhappiness, but you don’t have to elaborate any more here unless you feel like it.
I didn’t mean anything deep by that. Inefficiency just means “less than optimal” (or at least that’s what I mean by it). For him to say that it will lead to actual unhappiness would mean that the costs are so great that they overcome any associated benefits and push whatever our default state is down until it reaches actual unhappiness. I suspect that the forces aren’t strong enough to push us too far off our happiness “set points”.
Again, now I’m slightly curious about the rest of it...
I like your point about being afraid/ashamed to do something, the two cases in general and with regard to drinking as a social lubricant.
I’ll post my drinking experience over there too, though I don’t have too much to say.
Not the most formal sources, but at least it’ll be entertaining :)
Haha, ok
It seems that you don’t want to think about this now. If you end up thinking about it in the future, let me know—I’d love to hear your thoughts!
How convenient. I thought about it a bit more after all. I actually still like my initial idea of virtues being instrumental values. I commented on the link you sent me, but a lot of my comment is similar to what I commented here yesterday…
I actually still like my initial idea of virtues being instrumental values.
As a consequentialist, that’s how I’m inclined to think of it too. But I think it’s important to remember that non-consequentialists actually think of virtues as having intrinsic value. Of being virtuous.
But I don’t want to admit my preference ratios are that far out of whack; I don’t want to be that selfish.
Why?
I think that exploring and answering that question will be helpful.
Try thinking about it in two ways:
1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.
2) An emotional analysis of what you feel, why you feel it, and, in the event that your feelings aren’t accurate, how you can nudge them to be more accurate.
You:
I want to answer this question with “because emotion!” Is this allowed?
Also:
Analyze emotion?? Can you do that?! As an ISTP, just identifying emotion is difficult enough.
Absolutely! That’s how I’d start off. But the question I was getting at is “why does your brain produce those emotions?” What is the evolutionary psychology behind it? What events in your life have conditioned you to produce this emotion?
By default, I think it’s natural to give a lot of weight to your emotions and be driven by them. But once you really understand where they come from, I think it’s easier to give them a more appropriate weight, and consequently, to better achieve your goals. (1,2,3)
And you could manipulate your emotions too. Examples: You’ll be less motivated to go to the gym if you lay down on the couch. You’ll be more motivated to go to the gym if you tell your friends that you plan on going to the gym every day for a month.
So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.
So you don’t think terminal goals are arbitrary? Or are you just proclaiming what yours are?
Edit:
But honestly I rarely think about it, and I’m 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks’ vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don’t know why.
Are you sure that this has nothing to do with maximizing happiness? Perhaps the reason why you still want to donate is to preserve an image you have of yourself, which presumably is ultimately about maximizing your happiness.
(Below is a thought that ended up being a dead end. I was going to delete it, but then I figured you might still be interested in reading it.)
Also, an interesting thought occurred to me related to wanting vs. liking. Take a person who starts off with only the terminal goal of maximizing his happiness. Imagine that the person then develops an addiction, say to smoking. And imagine that the person doesn’t actually like smoking, but still wants to smoke. Ie. smoking does not maximize his happiness, but he still wants to do it. Should he then decide that smoking is a terminal goal of his?
I’m not trying to say that smoking is a bad terminal goal, because I think terminal goals are arbitrary. What I am trying to say is that… he seems to be actually trying to maximize his happiness, but just failing at it.
DEAD END. That’s not true. Maybe he is actually trying to maximize his happiness, maybe he isn’t. You can’t say whether he is or he isn’t. If he is, then it leads you to say “Well if your terminal goal is ultimately to maximize your happiness… then you should try to maximize your happiness (if you want to achieve your terminal goals).” But if he isn’t (just) trying to maximize happiness, he could add in whatever other terminal goals he wants. Deep down I still notice a bit of confusion regarding my conclusion that goals are arbitrary, and so I find myself trying to argue against it. But every time I do I end up reaching a dead end :/
Anyway, I went back through the book and found the title of the post. It’s Terminal Values and Instrumental Values. You can jump to “Consider the philosopher.”
Thank you! That does seem to be a/the key point in his article. Although “I value the choice” seems like a weird argument to me. I never thought of it as a potential counterargument. From what I can gather from Eliezer’s cryptic rebuttal, I agree with him.
I still don’t understand what Eliezer would say to someone that said, “Preferences are selfish and Goals are arbitrary”.
1- Which isn’t to imply that I’m good at this. Just that I sense that it’s true and I’ve had isolated instances of success with it.
2 - And again, this isn’t to imply that you shouldn’t give emotions any weight and be a robot. I used to be uncomfortable with just an “intuitive sense” and not really understanding the reasoning behind it. Reading How We Decide changed that for me. 1) It really hit me that there is “reasoning” behind the intuitions and emotions you feel. Ie. your brain does some unconscious processing. 2) It hit me that I need to treat these feelings as Bayesian evidence and consider how likely it is that I have that intuition when the intuition is wrong vs. how likely it is that I have the intuition when the intuition is right.
3 - This all feels very “trying-to-be-wise-sounding”, which I hate. But I don’t know how else to say it.
Oops, just when I thought I had the terminology down. :( Yeah, I still think terminal values are arbitrary, in the sense that we choose what we want to live for.
So you think our preference is, by default, the happiness mind-state, and our terminal values may or may not be the most efficient personal happiness-increasers. Don’t you wonder why a rational human being would choose terminal goals that aren’t? But we sometimes do. Remember your honesty in saying:
Regarding my happiness, I think I may be lying to myself though. I think I rationalize that the same logic applies, that if I achieve some huge ambition there’d be a proportional increase in happiness. Because my brain likes to think achieving ambition → goodness and I care about how much goodness gets achieved. But if I’m to be honest, that probably isn’t true.
I have an idea. So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an “altruism mutation” and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It’s a pleasant thought, anyway.
But honestly, I literally didn’t even know what evolution was until several weeks ago though, so I don’t really belong bringing up any science at all yet; let me switch back to personal experience and thought experiments.
For example, let’s say my preferences are 98% affected by selfishness and maybe 2% by altruism, since I’m very stingy with my time but less so with my money. (Someone who would die for someone else would have different numbers.) Anyway, on the surface I might look more altruistic because there is a LOT of overlap between decisions that are good for others and decisions that make me feel good. Or, you could see the giant overlap and assume I’m 100% selfish. When I donate to effective charities, I do receive benefits like liking myself a bit more, real or perceived respect from the world, a small burst of fuzzy feelings, and a decrease in the (admittedly small) amount of personal guilt I feel about the world’s unfairness. But if I had to put a monetary value on the happiness return from a $1000 donation, it would be less than $1000. When I use a preference ratio and prefer other people’s happiness, their happiness does make me happy, but there isn’t a direct correlation between how happy it makes me and the extent to which I prefer it. So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?
Also, what about diminishing marginal returns with donating? Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness with just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: never mind this paragraph, even if it’s realistic, it’s just scope insensitivity, right?
But similarly, let’s say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF. Maybe you’re thinking that this difference would affect her mind-state, that she wouldn’t be able to think of herself as such a rational person if she did that. But who really values their self-image of being a rational opportunity-cost analyzer that highly? I sure don’t (well, 99.99% sure anyway).
Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits (3) people who are willing to die for others and terminate their own happiness (4) people who choose to donate via effective altruism rather than random acts of kindness
Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here… Occam’s razor?
Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion, I’ll attempt to find its origin. Not from giving to church (it was only fair that the pastors/teachers/missionaries get their salaries and the members help pay for building costs, electricity, etc). I guess there was the whole “we love because He first loved us” idea, which I knew well and regurgitated often, but don’t think I ever truly internalized. I consciously knew I’d still care about others just as much without my faith. Growing up, I knew no one who donated to secular charity, or at least no one who talked about it. The only thing I knew that came close to resembling large-scale altruism was when people chose to be pastors and teachers instead of pursuing high-income careers, but if they did it simply to “follow God’s will” I’m not sure it still counts as genuinely caring about others more than yourself. On a small scale, my mom was really altruistic, like willing to give us her entire portion of an especially tasty food, offer us her jacket when she was cold too, etc… and I know she wasn’t calculating cost-benefit ratios, haha. So I guess she could have instilled it in me? Or maybe I read some novels with altruistic values? Idk, any other ideas?
I still don’t understand what Eliezer would say to someone that said, “Preferences are selfish and Goals are arbitrary”.
I’m no Eliezer, but here’s what I would say: Preferences are mostly selfish but can be affected by altruism, and goals are somehow based on these preferences. Whether or not you call them arbitrary probably depends on how you feel about free will. We make decisions. Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50? We don’t know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn’t really matter too much.
The way I’m (operationally) defining Preferences and words like happy/utility, Preferences are by definition what provides us with the most happiness/utility. Consider this thought experiment:
You start off as a blank slate and your memory is wiped. You then experience some emotion, and you experience this emotion to a certain magnitude. Let’s call this “emotion-magnitude A”.
You then experience a second emotion-magnitude—emotion-magnitude B. Now that you have experienced two emotion-magnitudes, you could compare them and say which one was more preferable.
You then experience a third emotion-magnitude, and insert it into the list [A, B] according to how preferable it was. And you do this for a fourth emotion-magnitude. And a fifth. Until eventually you do it for every possible emotion-magnitude (aka conscious state aka mind-state). You then end up with a list of every possible emotion-magnitude ranked according to desirability: [1...n]. These are your Preferences.
So the way I’m defining Preferences, it refers to how desirable a certain mind-state is relative to other possible mind-states.
Now think about consequentialism and how stuff leads to certain consequences. Part of the consequences is the mind-state it produces for you.
Say that:
Action 1 → mind-state A
Action 2 → mind-state B
Now remember mind-states could be ranked according to how preferable they are, like in the thought experiment. Suppose that mind-state A is preferable to mind-state B.
From this, it seems to me that the following conclusion is unavoidable:
Action 1 is preferable to Action 2.
In other words, Action 1 leads you to a state of mind that you prefer over the state of mind that Action 2 leads you to. I don’t see any ways around saying that.
To make it more concrete, let’s say that Action 1 is “going on vacation” and Action 2 is “giving to charity”.
IF going on vacation produces mind-state A.
IF giving to charity produces mind-state B.
IF mind-state A is preferable to mind-state B.
THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.
I call this “preferable”, but in this case words and semantics might just be distracting. As long as you agree that “going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to” when the first three bullet points are true, I don’t think we disagree about anything real; we might just be using different words for stuff.
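(If it helps to see the structure laid bare, here’s a toy sketch in Python. The labels and the ranking are made up purely for illustration; the point is just that once each action is identified with the mind-state it produces, the actions automatically inherit the mind-state ordering.)

```python
# Toy sketch of the argument above. Everything here is made up for illustration.

# Step 1: your Preferences = a ranking of every possible mind-state
# (lower number = more preferable).
mind_state_rank = {
    "mind-state A": 1,
    "mind-state B": 2,
}

# Step 2: each action is characterized by the mind-state it produces for you.
action_to_mind_state = {
    "going on vacation": "mind-state A",
    "giving to charity": "mind-state B",
}

def prefers(action_1, action_2):
    """Action 1 is preferable to Action 2 iff it produces a higher-ranked mind-state."""
    return (mind_state_rank[action_to_mind_state[action_1]]
            < mind_state_rank[action_to_mind_state[action_2]])

# Step 3: the conclusion follows mechanically from the three IFs.
print(prefers("going on vacation", "giving to charity"))  # True
```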
Thoughts?
Don’t you wonder why a rational human being would choose terminal goals that aren’t?
I do, but mainly from a standpoint of being interested in human psychology. I also wonder from a standpoint of hoping that terminal goals aren’t arbitrary and that people have an actual reason for choosing what they choose, but I’ve never found their reasoning to be convincing, and I’ve never found their informational social influence to be strong enough evidence for me to think that terminal goals aren’t arbitrary.
So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an “altruism mutation” and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It’s a pleasant thought, anyway.
:))) [big smile] (Because I hope what I’m about to tell you might address a lot of your concerns and make you really happy.)
I’m pleased to tell you that we all have “that altruism mutation”. Because of the way evolution works, we evolve to maximize the spread of our genes.
So imagine that there are two moms. They each have 5 kids, and they each enter an unfortunate situation where they have to choose between themselves and their kids.
Mom 1 is selfish and chooses to save herself. Her kids then die. She goes on to not have any more kids. Therefore, her genes don’t get spread at all.
Mom 2 is unselfish and chooses to save her kids. She dies, but her genes live on through her kids.
The outcome of this situation is that there are 0 organisms with selfish genes, and 5 with unselfish genes.
And so humans (and all other animals, from what I know) have evolved a very strong instinct to protect their kin. But as we know, preference ratios diminish rapidly from there. We might care about our friends and extended family, and a little less about our extended social group, and not so much about the rest of people (which is why we go out to eat instead of paying for meals for 100s of starving kids).
As far as evolution goes, this also makes sense. A mom that acts altruistically towards her social circle would gain respect, and the tribe’s respect may lead to them protecting that mom’s children, thus increasing the chances they survive and produce offspring themselves. Of course, that altruistic act by the mom may decrease her chances of surviving to produce more offspring and to take care of her current offspring, but it’s a trade-off.* On the other hand, acting altruistically towards a random tribe across the world is unlikely to improve her children’s chances of surviving and producing offspring, so the moms that did this have historically been less successful at spreading genes than the moms that didn’t.
*Note: using mathematical models to simulate and test these trade-offs is the hard part of studying evolution. The basic ideas are actually quite simple.
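(In case it’s useful, here’s a toy sketch in Python of what such a model could look like. The numbers are completely made up and it only encodes the Mom 1 / Mom 2 story above; it ignores the real trade-offs, so don’t read anything quantitative into it.)

```python
import random

# Toy model of the Mom 1 / Mom 2 story. All parameters are invented;
# this is only meant to show why a gene for kin-directed altruism can
# spread even though it sometimes kills its carrier.

GENERATIONS = 10
KIDS_PER_MOM = 5
CRISIS_PROB = 0.3    # chance a mom faces the "her or the kids" situation
KID_SURVIVAL = 0.4   # chance each surviving kid grows up to become a mom herself

def next_generation(moms):
    next_moms = []
    for genotype in moms:               # "selfish" or "altruist"
        kids = [genotype] * KIDS_PER_MOM
        if random.random() < CRISIS_PROB:
            if genotype == "selfish":
                kids = []               # she saves herself; her kids die
            # if altruist: she dies, but her kids (and her gene) live on
        next_moms.extend(k for k in kids if random.random() < KID_SURVIVAL)
    return next_moms

population = ["selfish"] * 50 + ["altruist"] * 50
for _ in range(GENERATIONS):
    population = next_generation(population)

print(population.count("altruist"), "altruists vs.",
      population.count("selfish"), "selfish")
# With these made-up numbers, the altruist gene reliably ends up far more common.
```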
But honestly, I literally didn’t even know what evolution was until several weeks ago though
I’m really sorry to hear that. I hope my being sorry isn’t offensive in any way. If it is, could you please tell me? I’d like to avoid offending people in the future.
so I don’t really belong bringing up any science at all yet;
Not so! Science is all about using what you do know to make hypotheses about the world and to look for observable evidence to test them. And that seems to be exactly what you were doing :)
Your hypotheses and thought experiments are really impressive. I’m beginning to suspect that you do indeed have training and are denying this in order to make a status play. [joking]
Like one human was born with an “altruism mutation” and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios?
I’d just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).
You seem to be saying that the mutation would spread because the organism remains alive. Think about it—if an organism has a mutation that increases the chances that it remains alive but that doesn’t increase the chances of having viable offspring, then that mutation would only remain in the gene pool until it died. And so of all the bajillions of our ancestors, only the ones still alive are candidates for the type of evolution you describe (mutations that only increase your chance of survival). Note that evolution is just the process of how genes spread.
Note: I’ve since realized that you may know this already, but figured I’d keep it anyway.
Okay, I guess I should have known some terminology correction was coming. If you want to define “happiness” as the preferred mind-state, no worries. I’ll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.
IF going on vacation produces mind-state A.
IF giving to charity produces mind-state B.
IF mind-state A is preferable to mind-state B.
THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.
Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is. An individual must strike a balance between his desire for pleasure and his desire to be altruistic to achieve Harmonious Happiness (Look, I made up a term with capital letters! LW is rubbing off on me!)
I’m pleased to tell you that we all have “that altruism mutation”. Because of the way evolution works, we evolve to maximize the spread of our genes.
Yay!!! I didn’t think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison so their genes die out, and, less probably but still possibly, they could also be the least likely to find spouses and have kids.
I’m really sorry to hear that. I hope my being sorry isn’t offensive in any way. If it is, could you please tell me? I’d like to avoid offending people in the future.
I almost never get offended, much less about this. I appreciate the sympathy! But others could find it offensive in that they’d find it arrogant. My thoughts on arrogance are a little unconventional. Most people think it’s arrogant to consider one person more gifted than others or one idea better than others. Some people really are more gifted and have far more positive qualities than others. Some ideas really are better. If you happen to be one of the more gifted people or understand one of the better ideas (evolution, in this case), and you recognize yourself as more gifted or recognize an idea as better, that’s not arrogance. Not yet. That’s just an honest perspective on value. Once you start to look down on people for being less gifted than you are or having worse ideas, that’s when you cross the line and become arrogant. If you are more gifted, or have more accurate ideas, you can happily thank the universe you weren’t born in someone else’s shoes, while doing your best to imagine what life would have been like if you were. You can try to help others use their own gifts to the best of their potential. You can try to share your ideas in a way that others will understand. Just don’t look down on people for not having certain abilities or not believing the correct ideas, because you really can’t understand what it’s like to be them :)

But yeah, if you don’t want to offend people, it’s dangerous to express pity. Some people will look at your “feeling sorry” for those who don’t share your intelligence/life opportunities/correct ideas and call you arrogant for it, but I think they’re wrong to do so. There’s a difference between feeling sorry for people and looking down on them. For example, I am a little offended when one Christian friend and her dad, who was my high school Calculus teacher, look down on me. Most of my other friends just feel sorry for me, and I would be more offended if they didn’t, because feeling sorry at least shows they care.
Your hypotheses and thought experiments are really impressive. I’m beginning to suspect that you do indeed have training and are denying this in order to make a status play.
I’m flattered!! But I must confess the one thought experiment that was actually super good, the one at the end about free will, wasn’t my idea. It was a paraphrase of this guy’s idea and I had used it in the past to explain my deconversion to my friends. The other ideas were truly original, though :) (Not to say no one else has ever had them! Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas, like how I felt when I first read Famine, Affluence and Morality ten years after trying to convince my family it was wrong to eat in restaurants)
I’d just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).
Hey, this sounds like what I was just reading this week in the rationality book about Adaptation-Executers, not Fitness-Maximizers! I think I get this, and maybe I didn’t write very clearly (or enough) here, but maybe I still don’t fully understand. But if someone is nice to have around, wouldn’t he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.
Note: I just read your note and now have accordingly decreased the probability that I had said something way off-base :)
I’ll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.
I agree that in most cases (sociopaths are an exception) pleasure and doing good for others are both things that determine how happy something makes you. And so in that sense, it doesn’t seem that we disagree about anything real.
But you use romantic-sounding wording. Ex. “special recognition as an ultimate motivator”.
“ultimate motivator”
So the way motivation works is that it’s “originally determined” by our genes, and “adjusted/added to” by our experiences. So I agree that altruism is one of our “original/natural motivators”. But I wouldn’t say that it’s an ultimate motivator, because to me that sounds like it implies that there’s something final and/or superseding about altruism as a motivator, and I don’t think that’s true.
“special recognition”
I’m going to say my original thought, and then I’m going to say how I have since decided that it’s partially wrong of me.
My original thought is that “there’s no such thing as a special motivator”. We could be conditioned to want anything. Ie. to be motivated to do anything. The way I see it, the inputs are our genes and our experiences, and the output is the resulting motivation, and I don’t see how one output could be more special than another.
But that’s just me failing to use the word special the way a good number of people customarily use it. One use of the word special would mean that there’s something inherently different about it, and it’s that use that I argue against above. But another way people use it is just to mean that it’s beautiful or something. Ie. even though altruism is an output like any other motivation, humans find that to be beautiful, and I think it’s sensible to use the word special to describe that.
This all may sound a lot like nitpicking, and it sort of is, but not really. I actually think there’s a decent chance that clarifying what I mean by these words will bring us a lot closer to agreement.
Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is.
True, but that wasn’t the point I was making. I was just using that as an example. Admittedly, one that isn’t always true.
Yay!!!
I’m curious—was this earth-shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.
So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness?
And that this makes you sad and that you’d be happier if people did indeed have some sort of altruism “built in”.
I didn’t think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison, so their genes die out, and (less probably, but still possibly) they could also be the least likely to find spouses and have kids.
I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be “altruistic to our genes”, but it’s a common and understandable error to instinctively think about society as we know it. In actuality, we’ve been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred years? (I realize you might understand this, but I’m commenting just in case you didn’t)
My thoughts on arrogance are a little unconventional.
Not here they’re not :) And I think that description was quite eloquent.
I used to be bullied and would be sad/embarrassed if people made fun of me. But at some point I got into a fight, ended it, and had a complete 180-degree shift in how I think about this. Since then, I’ve sort of decided that it doesn’t make sense at all to be “offended” by anything anyone says about you. What does that even mean? That your feelings are hurt? The way I see it:
a) Someone points out something that is both fixable and wrong with you, in which case you should thank them and change it. And if your feelings get hurt along the way, that’s just a cost you have to incur along the path of seeking a more important end (self improvement).
b) Someone points out something about you that is not fixable, or not wrong with you. In that case they’re just stupid (or maybe just wrong).
In reality, I’m exaggerating a bit because I understand that it’s not reasonable to expect humans to react like this all the time.
It was a paraphrase of this guy’s idea and I had used it in the past to explain my deconversion to my friends.
Haha, I see. Well now I’m less impressed by your intellect but more impressed with your honesty!
Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas
Yea, me too. But isn’t it really great at the same time though! Like when I first read the Sequences, it just articulated so many things that I thought that I couldn’t express. And it also introduced so many new things that I swear I would have arrived at. (And also introduced a bunch of new things that I don’t think I would have arrived at)
But if someone is nice to have around, wouldn’t he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.
But I wouldn’t say that it’s an ultimate motivator, because to me that sounds like it implies that there’s something final and/or superseding about altruism as a motivator, and I don’t think that’s true.
Yes, that’s exactly what I meant to imply! Finally, I used the right words. Why don’t you think it’s true?
I don’t see how one output could be more special than another.
I did just mean “inherently different” so we’re clear here. I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.
I’m curious—was this earth-shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.
Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn’t really suspect it (but I wouldn’t have ruled it out entirely). The “Yay!!” was about there being evidence/logic to support my intuition being true.
I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be “altruistic to our genes”, but it’s a common and understandable error to instinctively think about society as we know it. In actuality, we’ve been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred years? (I realize you might understand this, but I’m commenting just in case you didn’t)
Prisons didn’t exist, but enemies did, and totally selfish people probably have more enemies… so yeah, I understand :)
I’ve sort of decided that it doesn’t make sense at all to be “offended” by anything anyone says about you.
No, you’re right! Whenever someone says something and adds “no offense” I remark that there must be something wrong with me, because I never take offense at anything. I’ve used your exact explanation to talk about criticism. I would rather hear it than not, because there’s a chance someone recognizes a bad tendency/belief that I haven’t already recognized in myself. I always ask people for negative feedback; there’s no downside to it (unless you already suffer from depression, or something).
In real life, the only time I feel offended/mildly annoyed is when someone flat-out claims I’m lying, like when my old teacher said he didn’t believe me that I spent years earnestly praying for a stronger faith. But even as I was mildly annoyed, I understood his perspective completely, because he either had to disbelieve me or disbelieve his entire understanding of the Bible and a God who answers prayer.
Yea, me too. But isn’t it really great at the same time though! Like when I first read the Sequences, it just articulated so many things that I thought that I couldn’t express. And it also introduced so many new things that I swear I would have arrived at. (And also introduced a bunch of new things that I don’t think I would have arrived at)
Yeah, ditto all the way! It’s entirely great :) I feel off the hook to go freely enjoy my life knowing it’s extremely probable that somewhere else, people like you, people who are smarter than I am, will have the ambition to think through all the good ideas and bring them to fruition.
I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.
I think we’ve arrived at a core point here.
See my other comment:
I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they’re the only two ultimate motivators. Or at least I can’t think of any other supposed motivation that couldn’t be traced back to one or both of these.
In a way, I think this is true. Actually, I should give more credit to this idea—yeah, it’s true in an important way.
My quibble is that motivation is usually not rational. If it was, then I think you’d be right. But the way our brains produce motivation isn’t rational. Sometimes we are motivated to do something… “just because”. Ie. even if our brain knows that it won’t lead to happiness or goodness, it could still produce motivation.
And so in a very real sense, motivation itself is often something that can’t really be traced back. But I try really hard to respond to what people’s core points are, and what they probably meant. I’m not precisely sure what your core point is, but I sense that I agree with it. That’s the strongest statement I could make.
Unfortunately, I think my scientific background is actually harming me right now. We’re talking about a lot of things that have very precise scientific meanings, and in some cases I think you’re deviating from them a bit. Which really isn’t too big a deal because I should be able to infer what you mean and progress the conversation, but I think I’m doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I’m familiar with, which is sort of bad “conversational manners”, because the only point of words in a conversation is to communicate ideas, and it’d probably be more efficient if I were better able to use other definitions.
Back to you:
Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn’t really suspect it (but I wouldn’t have ruled it out entirely). The “Yay!!” was about there being evidence/logic to support my intuition being true.
So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?
The way I’m defining preference ratios:
Preference ratio for person X = how much you care about yourself / how much you care about person X
Or, more formally, how many units of utility person X would have to get before you’d be willing to sacrifice one unit of your own utility for him/her.
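To make that concrete, here’s a minimal toy sketch of that ratio in Python; the care_self/care_other weights and the numbers are purely hypothetical, just for intuition:

```python
# Toy illustration of a "preference ratio" (hypothetical weights, not a real model).
# The ratio is how many units of utility person X would need to gain
# before you'd willingly give up one unit of your own utility.

def preference_ratio(care_self: float, care_other: float) -> float:
    return care_self / care_other

# e.g. if I weight my own utility at 1.0 and a stranger's at 0.01,
# the stranger would need to gain ~100 units before I'd give up 1 of mine.
print(preference_ratio(1.0, 0.01))  # 100.0
```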
So what does altruism mean? Does it mean “I don’t need to gain any happiness in order for me to want to help you, but I don’t know if I’d help you if it caused me unhappiness.”? Or does it mean “I want to help you regardless of how it impacts my happiness. I’d go to hell if it meant you got one extra dollar.”?
[When I was studying for some vocab test in middle school my cards were in alphabetical order at one point and I remember repeating a thousand times—“altruism: selfless concern for others. altruism: selfless concern for others. altruism: selfless concern for others...”. That definition would imply the latter.]
Let’s take the former definition. In that case, you’d want person X to get one unit of utility even if you get nothing in return, so your preference ratio would be 0. But this doesn’t necessarily work in reverse. Ie. in order to save person X from losing one unit of utility, you probably wouldn’t sacrifice a bajillion units of your own utility. I very well might be confusing myself with the math here.
Note: I’ve been trying to think about this but my approach is too simplistic and I’ve been countering it, but I’m having trouble articulating it. If you really want me to I could try, otherwise I don’t think it’s worth it. Sometimes I find math to be really obvious and useful, and sometimes I find it to be the exact opposite.
Also, what about diminishing marginal returns with donating?
This depends on the person, but I think that everyone experiences it to some extent.
Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase essentially equal happiness by donating just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: never mind this paragraph, even if it’s realistic, it’s just scope insensitivity, right?
If the person is trying to maximize happiness, the question is just “how much happiness would a marginal 1k donation bring” vs. “how much happiness would a 1k vacation bring”. The answers to these questions depend on the person.
Sorry, I’m not sure what you’re getting at here. The person might be scope insensitive to how much impact the 1k could have if he donated it.
But similarly, let’s say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF.
Yes, the optimal donation strategy for maximizing your own happiness is different from the one that maximizes impact :)
Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition, (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits, (3) people who are willing to die for others and terminate their own happiness, and (4) people who choose to donate via effective altruism rather than random acts of kindness.
2, 3 and 4 are examples of people not trying to maximize their happiness.
1 is me sometimes knowingly following an impulse my brain produces even when I know it doesn’t maximize my happiness. Sadly, this happens all the time. For example, I ate Chinese food today, and I don’t think that doing so would maximize my long-term happiness.
In the case of my ambitions, my brain produces impulses/motivations stemming from things including:
Wanting to do good.
Wanting to prove to myself I could do it.
Wanting to prove to others I could do it.
Social status.
Brains don’t produce impulses in perfect, or even good, alignment with what they expect will maximize utility. I find the decision to eat fast food to be an intuitive example of this. But I don’t see how this changes anything about Preferences or Goals.
Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here… Occam’s razor?
I’m sorry, I’m trying to understand what you’re saying but I think I’m failing. I think the problem is that I’m defining words differently than you. I’m trying to figure out how you’re defining them, but I’m not sure. Anyway, I think that if we clarify our definitions, we’d be able to make some good progress.
altruism could shape our preferences like happiness
The way I’m thinking about it… think back to my operational definition of preferences in the first comment where I talk about how an action leads to a mind-state. What action leads to what mind-state depends on the person. An altruistic action for you might lead to a happy mind-state, and that same action might lead me to a neutral mind-state. So in that sense altruism definitely shapes our preferences.
I’m not sure if you’re implying this, but I don’t see how this changes the fact that you could choose to strive for any goal you want. That you could only say that a means is good at leading to an end. That you can’t say that an end is good.
Ie. I could choose the goal of killing people, and you can’t say that it’s a bad goal. You could only say that it’s bad at leading to a happy society. Or that it’s bad at making me happy.
Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here… Occam’s razor?
That’s a term that I don’t think I have a proper understanding of. There was a point when I realized that it just means that A & B can never be more likely than A alone, and is strictly less likely unless B is certain (given A). Like let’s say that P(A) is .75. Even if P(B) is .999999, P(A & B) < P(A). And so in that sense, simpler = better.
But people use it in ways that I don’t really understand. Ie. sometimes I don’t get what they mean by simpler. I don’t see that the term applies here though.
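As a tiny numeric check of that conjunction point (my own throwaway example, assuming A and B are independent):

```python
# Conjunction rule: for independent events, P(A and B) = P(A) * P(B),
# so the conjunction can never be more probable than A alone.
p_a = 0.75
p_b = 0.999999
p_a_and_b = p_a * p_b
print(p_a_and_b)         # 0.74999925 (approximately)
print(p_a_and_b <= p_a)  # True
```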
Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion
I think it’d be helpful if you defined specifically what you mean by altruism. I mean, you don’t have to be all formal or anything, but more specific would be useful.
As far as socially conditioned emotions goes, our emotions are socially conditioned to be happy in response to altruistic things and sad in response to anti-altruistic things. I wouldn’t say that that makes altruism itself a socially conditioned emotion.
Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50?
Wow, that’s a great way to put it! You definitely have the head of a scientist :)
We don’t know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn’t really matter too much.
Yeah, this has gotten a little too tangled up in definitions. Let’s try again, but from the same starting point.
Happiness=preferred mind-state (similar, potentially interchangeable terms: satisfaction, pleasure)
Goodness=what leads to a happier outcome for others (similar, potentially interchangeable terms: morality, altruism)
I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they’re the only two ultimate motivators. Or at least I can’t think of any other supposed motivation that couldn’t be traced back to one or both of these.
Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice, i.e. it makes the virtue ethicist happy and she believes it benefits society? I’m guessing that in certain situations, the virtue ethicist might even abandon the loyalty virtue if it conflicted with the underlying motivations of happiness and goodness. Thoughts?
Edit: I guess I’m realizing the way you defined preference doesn’t work for me either, and I should have said so in my other comment. I would say prefer simply means “tend to choose.” You can prefer something that doesn’t lead to the happiest mind-state, like a sacrificial death, or here’s an imaginary example:
You have to choose:
Either you catch a minor cold, or a mother and child you will never meet will get into a car accident. The mother will have serious injuries, and her child will die. Your memory of having chosen will be erased immediately after you choose regardless of your choice, so neither guilt nor happiness will result. You’ll either suddenly catch a cold, or not.
Not only is choosing to catch a cold an inefficient happiness-maximizer like donating to effective charities, this time it will actually have a negative effect on your happiness mind-state. Can you still prefer that you catch a cold? According to what seems to me like common real-world usage of “prefer” you can. You are not acting in some arbitrary, irrational, inexplicable way in doing so. You can acknowledge you’re motivated by goodness here, rather than happiness.
I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they’re the only two ultimate motivators. Or at least I can’t think of any other supposed motivation that couldn’t be traced back to one or both of these.
In a way, I think this is true. Actually, I should give more credit to this idea—yeah, it’s true in an important way.
My quibble is that motivation is usually not rational. If it was, then I think you’d be right. But the way our brains produce motivation isn’t rational. Sometimes we are motivated to do something… “just because”. Ie. even if our brain knows that it won’t lead to happiness or goodness, it could still produce motivation.
And so in a very real sense, motivation itself is often something that can’t really be traced back. But I try really hard to respond to what people’s core points are, and what they probably meant. I’m not precisely sure what your core point is, but I sense that I agree with it. That’s the strongest statement I could make.
Unfortunately, I think my scientific background is actually harming me right now. We’re talking about a lot of things that have very precise scientific meanings, and in some cases I think you’re deviating from them a bit. Which really isn’t too big a deal because I should be able to infer what you mean and progress the conversation, but I think I’m doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I’m familiar with, which is sort of bad “conversational manners”, because the only point of words in a conversation is to communicate ideas, and it’d probably be more efficient if I were better able to use other definitions.
Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice
Haha, you seem to be confused about virtue ethics in a good way :)
A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn’t care whether the loyalty led to happiness or goodness.
Now, I think that consequentialism is a more sensible position, and I think you do too. And in the real world, virtue ethicists often have virtues that include happiness and goodness. And if they run into a conflict between say the virtue of goodness and the one of loyalty, well I don’t know how they’d resolve it, but I think they’d give some weight to each, and so in practice I don’t think virtue ethicists end up acting too crazy, because they’re stabilized by their virtues of goodness and happiness. On the other hand, a virtue ethicist without the virtue of goodness… that could get scary.
I guess I’m realizing the way you defined preference doesn’t work for me either
I hadn’t thought about it before, but now that I do I think you’re right. I’m not using the word “prefer” to mean what it really means. In my thought experiment I started off using it properly in saying that one mind-state is preferable to another.
But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it’s commonly used. In the way it’s commonly used, an action is preferable… if you prefer it.
I’m feeling embarrassed that I didn’t realize this immediately, but am glad to have realized it now because it allows me to make progress. Progress feels so good! So...
THANK YOU FOR POINTING THIS OUT!
According to what seems to me like common real-world usage of “prefer” you can.
Absolutely. But I think that I was wrong in an even more general sense than that.
So I think you understood what I was getting at with the thought experiment though—do you have any ideas about what words I should substitute in that would make more sense?
(I think that the fact that this is the slightest bit difficult is a huge failure of the English language. Language is meant to allow us to communicate. These are important concepts, and our language isn’t giving us a very good way to communicate them. I actually think this is a really big problem. The linguistic-relativity hypothesis basically says that our language restricts our ability to think about the world, and I think (and it’s pretty widely believed) that it’s true to some extent (the extent itself is what’s debated).)
In a way, I think this is true. Actually, I should give more credit to this idea—yeah, it’s true in an important way.
Yay, agreement :)
My quibble is that motivation is usually not rational. If it was, then I think you’d be right. But the way our brains produce motivation isn’t rational. Sometimes we are motivated to do something… “just because”. Ie. even if our brain knows that it won’t lead to happiness or goodness, it could still produce motivation.
Great point. I actually had a similar thought and added the qualifier “psychological” in my previous comment. Maybe “rational” would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology? And don’t feel bad about it, I’m sure the benefits of studying science outweigh the cost of the occasional decrease in conversation efficiency :)
A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn’t care whether the loyalty led to happiness or goodness.
Then I think very, very few virtue ethicists actually exist, and virtue ethics is so abnormal it could almost qualify as a psychological disorder. Like the common ethics dilemma of exposing hidden Jews: if someone’s virtue were “honesty”, they would have to. (In the philosophy class I took, we resolved this dilemma by redefining “truth” and capitalizing; e.g. Timmy’s father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old “correspondence theory” in ten seconds flat. I will accept any further sympathy you wish to express. Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.
Edit:
A person with extremely low concern for goodness is a sociopath.
The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio. And some canceling occurs in this ratio because of overlap.
But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it’s commonly used. In the way it’s commonly used, an action is preferable… if you prefer it.
Yes! I wish I could have articulated it that clearly for you myself.
Instead of saying we “prefer” an optimal mind-state… you could say we “like” it the most, but that might conflict with your scientific definitions for likes and wants. But here’s an idea, feel free to critique it...
“Likes” are things that actually produce the happiest, optimal mind-states within us
“Wants” are things we prefer, things we tend to choose when influenced by psychological motivators (what we think will make us happy, what we think will make the world happy)
Some things, like smoking, we neither like (or maybe some people do, idk) nor want, but we still do because the physical motivators overpower the psychological motivators (i.e. we have low willpower)
I think that the fact that this is the slightest bit difficult is a huge failure of the English language.
Great point. I actually had a similar thought and added the qualifier “psychological” in my previous comment. Maybe “rational” would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology?
Hmmm, so the question I’m thinking about is, “what does it mean to say that a motivation is traced back to something”. It seems to me that the answer to that involves terminal and instrumental values. Like if a person is motivated to do something, but is only motivated to do it to the extent that it leads to the person’s terminal value, then it seems that you could say that this motivation can be traced back to that terminal value.
And so now I’m trying to evaluate the claim that “motivations can always be traced back to happiness and goodness”. This seems to be conditional on happiness and goodness being terminal goals for that person. But people could, and often do, choose whatever terminal goals they want. For example, people have terminal goals like “self improvement” and “truth” and “be a man” and “success”. And so, I think that for a person with a terminal goal other than happiness and goodness, they will have motivations that can’t be traced back to happiness or goodness.
But I think that it’s often the case that motivations can be traced back to happiness and goodness. Hopefully that means something.
(In the philosophy class I took, we resolved this dilemma by redefining “truth” and capitalizing; e.g. Timmy’s father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old “correspondence theory” in ten seconds flat.
Wait… so the Timmy example was used to argue against correspondence theory? Ouch.
Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.
Perhaps. Truth might be an exception for some people. Ex. some people may choose to pursue the truth even if it’s guaranteed to lead to decreases in happiness and goodness. And success might also be an exception for some people. They also may choose to pursue success even if it’s guaranteed to lead to decreases in happiness and goodness. But this becomes a question of some sort of social science rather than of philosophy.
The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio.
I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.
Instead of saying we “prefer” an optimal mind-state… you could say we “like” it the most, but that might conflict with your scientific definitions for likes and wants.
Eh, I think that this would conflict with the way people use the word “like” in a similar way to the problems I ran into with “preference”. For example, it makes sense to say that you like mind-state A more than mind-state B. But I’m not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term “like”. Damn language! :)
And so now I’m trying to evaluate the claim that “motivations can always be traced back to happiness and goodness”. This seems to be conditional on happiness and goodness being terminal goals for that person.
I had just reached the same conclusion myself! So I think that yeah, happiness and goodness are the only terminal values, for the vast majority of the thinking population :)
Note: I really don’t like the term “happiness” to describe the optimal mind-state since I connect it too strongly with “pleasure” so maybe “satisfaction” would be better. I think of satisfaction as including both feelings of pleasure and feelings of fulfillment. What do you think?
For example, people have terminal goals like “self improvement” and “truth” and “be a man” and “success”
I think that all these are really just instrumental goals that people subconsciously, and perhaps mistakenly, believe will lead them to their real terminal goals of greater personal satisfaction and/or an increase in the world’s satisfaction.
Wait… so the Timmy example was used to argue against correspondence theory? Ouch.
It was an example of whatever convoluted theory my professor invented as a replacement for correspondence theory.
But this becomes a question of some sort of social science rather than of philosophy.
Exactly. I think people like the ones you mention are quite rare.
I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.
Ok, thanks :)
Eh, I think that this would conflict with the way people use the word “like” in a similar way to the problems I ran into with “preference”. For example, it makes sense to say that you like mind-state A more than mind-state B. But I’m not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term “like”. Damn language! :)
What if language isn’t the problem? Maybe the connection between mind-states and actions isn’t so clear-cut after all. If you like mind-state A more than mind-state B, then action A is mind-state-optimizing, but I’m not sure you can go much farther than that… because goodness.
I had just reached the same conclusion myself! So I think that yeah, happiness and goodness are the only terminal values, for the vast majority of the thinking population :)
:)
Note: I really don’t like the term “happiness” to describe the optimal mind-state since I connect it too strongly with “pleasure” so maybe “satisfaction” would be better. I think of satisfaction as including both feelings of pleasure and feelings of fulfillment. What do you think?
I haven’t found a term that I really like. Utility is my favorite though.
I think that all these are really just instrumental goals that people subconsciously, and perhaps mistakenly, believe will lead them to their real terminal goals of greater personal satisfaction and/or an increase in the world’s satisfaction.
Idk, I want to agree with you but I sense that it’s more like 95% of the population. I know just the 2 people to ask though. My two friends are huge proponents of things like “give it your all” and “be a man”.
Also, what about religious people? Aren’t there things they value independent of happiness and goodness? And if so wouldn’t their motivations reflect that?
Edit:
Friend 1 says it’s ultimately about avoiding feeling bad about himself, which I classify as him wanting to optimize his mind-state.
Friend 2 couldn’t answer my questions and said his decisions aren’t that calculated.
Not too useful after all. I was hoping that they’d be more insightful.
mind-state-optimizing
Oooooo I like that term!
Maybe the connection between mind-states and actions isn’t so clear-cut after all.
It seems clear-cut to me. An action leads to one state of the world, and in that state of the world you have one mind-state. Can you elaborate?
but I’m not sure you can go much farther than that… because goodness.
Idk, I want to agree with you but I sense that it’s more like 95% of the population. I know just the 2 people to ask though. My two friends are huge proponents of things like “give it your all” and “be a man”
Yeah, ask those friends if in a situation where “giving it their all” and “being men” made them less happy and made the world a worse place, whether they would still stick with their philosophies. And if they genuinely can’t imagine a situation where they would feel less satisfied after “giving it their all,” then I would postulate that as they’re consciously pursuing these virtues, they’re subconsciously pursuing personal satisfaction.
(Edit: Just read a little further, that you already have their responses. Yeah, not too insightful, maybe I’ll develop this idea a bit more and ask the rest of the LW community what they think.)
(Edit #2: Thought about this a little more, and I have a question you might be able to answer. Is the subconscious considered psychological or physical?)
As for religious people...well, in the case of Christianity, people would probably just want to “become Christ-like” which, for them, overlaps really well with personal satisfaction and helping others. But in extreme cases, someone might truly aspire to “become obedient to X” in which case obedience could be the terminal value, even if the person doesn’t think obedience will make them happy or make the world a better place. But I think that such ultra-religiosity is rare, and that most people are still ultimately psychologically motivated to either do what they think will make them happy, or what they think will make the world a better place. I feel like this is related to Belief in Belief but I can’t quite articulate the connection. Maybe you’ll understand, if not, I’ll try harder to verbalize it.
It seems clear-cut to me. An action leads to one state of the world, and in that state of the world you have one mind-state.
No, if that’s all you’re saying, that “If you like mind-state A more than mind-state B, then action A is mind-state-optimizing,” then I completely agree! For some reason, I read your sentence (“But I’m not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term ‘like’”) and thought you were trying to say they necessarily like action A more... haha, oops
Yeah, ask those friends if in a situation where “giving it their all” and “being men” made them less happy and made the world a worse place, whether they would still stick with their philosophies.
How about this answer: “If that makes me less happy and makes the world a worse place, the world would be decidedly weird in a lot of fundamental and ubiquitous ways. I am unable to comprehend what such a weird world would be like in enough detail to make meaningful statements about what I would do in it.”
Let’s just focus on “giving it your all.” What is “it”?? You surely can’t give everything your all. How do you choose which goals to pursue? “Giving it your all” is a bit abstract.
Yeah, ask those friends if in a situation where “giving it their all” and “being men” made them less happy and made the world a worse place, whether they would still stick with their philosophies.
That’s exactly what I asked them.
The first one took a little prodding but eventually gave a somewhat passable answer. And he’s one of the smartest people I’ve ever met. The second one just refused to address the question. He said he wouldn’t approach it that way and that his decisions aren’t that calculated. I don’t know how you want to explain it, but for pretty much every person I’ve ever met or read, sooner or later they seem to just flinch away from the truth. You seem to be particularly good at not doing that—I don’t think you’ve demonstrated any flinching yet.
And see what I mean about how the ability to not flinch is often the limiting factor? In this case, the question wasn’t really difficult in an intellectual way at all. It just requires you to make a legitimate effort to accept the truth. The truth is often uncomfortable to people, and thus they flinch away, don’t accept it, and fail to make progress.
Thought about this a little more, and I have a question you might be able to answer. Is the subconscious considered psychological or physical?
I could definitely answer that! This really gets at the core of the map vs. the territory (maybe my favorite topic :) ). “Physical” and “psychological” are just two maps we use to describe reality. In reality itself, the territory, there’s no such thing as physical/psychological. If you look at the properties of individual atoms, they don’t have any sort of property that says “I’m a physical atom” or “I’m a psychological atom”. They only have properties like mass and electric charge (as far as we know).
I’m not sure how much you know about science, but I find the physics-chemistry-biology spectrum to be a good demonstration of the different levels of maps. Physics tries to model reality as precisely as possible (well, some types of physics that is; others aim to make approximations). Chemistry approximates reality using the equations of physics. Biology approximates reality using the equations of chemistry. And you could even add psychology in there and say that it approximates reality using the ideas (not even equations) of biology.
As far as psychology goes, a little history might be helpful. It’s been a few years since I studied this, but here we go. In the early 1900s, behaviorism was the popular approach to psychology. They just tried to look at what inputs lead to what outputs. Ie. they’d say “if we expose people to situation X, how do they respond”. The input is the situation, and the output is how they respond.
Now, obviously there’s something going on that translates the input to the output. They had the sense that the translation happens in the brain, but it was a black box to them and they had no clue how it works. Furthermore, they sort of saw it as so confusing that there’s no way they could know how it works. And so behaviorists were content to just study what inputs lead to what outputs, and to leave the black box as a mystery.
Then in the 1950s there was the cognitive revolution where they manned up and ventured into the black box. They thought that you could figure out what’s going on in there and how the inputs get translated to outputs.
Now we’re almost ready to go back to your question—I haven’t forgotten about it. So cognitive psychology is sort of about what’s going on in our head and how we process stuff. Regarding the subconscious, even though we’re not conscious of it, there’s still processing going on in that black box, and so the study of that processing still falls under the category of cognitive psychology. But again, cognitive psychology is a high-level map. We’re not there yet, but we’d be better able to understand that black box with a lower level map like neuroscience. And we’d be able to learn even more about the black box using an even lower level map like physics.
If you have any other questions or even just want to chat informally about this stuff, please let me know. I love thinking about this stuff, and I love trying to explain things (and I like to think I’m pretty good at it), and you’re really good at understanding things and asking good questions, which often leads me to think about things differently and learn new things.
But I think that such ultra-religiosity is rare, and that most people are still ultimately psychologically motivated to either do what they think will make them happy, or what they think will make the world a better place.
Interesting. I had the impression that religious people had lots of other terminal values. So things like “obeying God” aren’t terminal values? I had the impression that most religions teach that you should obey no matter what. That you should obey even if you think it’ll lead to decreases in goodness and happiness. Could you clarify?
Edit: I just realized something that might be important. You emphasize the point that there’s a lot of overlap between happiness/goodness and other potentially terminal values. I haven’t been emphasizing it. I think we both agree that there is the big overlap. And I think we agree that “actions can either be mind-state optimizing, or not mind-state optimizing” and “terminal values are arbitrary”.
I think you’re right to put the emphasis on this and to keep bringing it up as an important reminder. Being important, I should have given it the attention it deserves. Thanks for persisting!
I feel like this is related to Belief in Belief but I can’t quite articulate the connection. Maybe you’ll understand, if not, I’ll try harder to verbalize it.
It took me a while to understand belief in belief. I read the sequences about 2 years ago and didn’t understand it until a few weeks ago as I was reading HPMOR. There was a point when one of the characters said he believed something but acted as if he didn’t. Like if he believed what he said he believed, he definitely would have done X, because X is clearly in his interest. I just reread belief in belief, and now I feel like it makes almost complete sense to me.
From what I understand, the idea with belief in belief is that:
a) There’s your model of how you think the world will look.
b) And then there’s what you say you believe.
To someone who values consistency, a) and b) should be the same thing. But humans are weird, and sometimes a) and b) are different.
In the scenario you describe, there’s a religious person who ultimately wants goodness and would choose goodness over his virtues if he had to pick, but he nevertheless claims that his virtues are terminal goals to him. And so as far as a) goes, you both agree that he would choose goodness over his virtues. But as far as b) goes, you claim to believe different things. What he claims to believe is inconsistent with his model of the world, and so I think you’re right—this would be an example of belief in belief.
If you like mind-state A more than mind-state B, then action A is mind-state-optimizing
Yup, that’s all I’m trying to say. No worries if you misunderstood :). I hadn’t realized that this was ultimately all I was trying to say before talking to you and now I have, so thank you!
I don’t know how you want to explain it, but for pretty much every person I’ve ever met or read, sooner or later they seem to just flinch away from the truth. You seem to be particularly good at not doing that—I don’t think you’ve demonstrated any flinching yet.
Well, thanks! How does that saying go? What is true is already so? Although in the context of this conversation, I can’t say there’s anything inherently wrong with flinching; it could help fulfill someone’s terminal value of happiness. If someone doesn’t feel dissatisfied with himself and his lack of progress, what rational reason is there for him to pursue the truth? Obviously, I would prefer to live in a world where relentlessly pursuing the truth led everyone to their optimal mind-states, but in reality this probably isn’t the case. I think “truth” is just another instrumental goal (it’s definitely one of mine) that leads to both happiness and goodness.
In reality itself, the territory, there’s no such thing as physical/psychological.
Yeah! I think I first typed the question as “is it physical or psychological?” and then caught myself and rephrased, adding the word “considered” :) I just wanted to make sure I’m not using scientific terms with accepted definitions that I’m unaware of. Thanks for your answer!! You are really good at explaining stuff. I think the “cognitive psychology” stuff is related to what I just read about last week in the ebook too, about neural networks, the two different brain map models, and the bleggs and rubes.
I just reread belief in belief, and now I feel like it makes almost complete sense to me.
I don’t know your religious background, but if you don’t have one, that’s really impressive, given that you haven’t actually experienced much belief-in-belief since Santa (if you ever did). But yeah, basically, this sentence summarizes it perfectly:
But it is realistic to say the dragon-claimant anticipates as if there is no dragon in his garage, and makes excuses as if he believed in the belief.
Any time a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn’t exist. I realized this, and sometimes tried to convince myself and others that we were acting wrongly by not being more devout. I couldn’t shake the notion that spending time having fun instead of praying or sharing the gospel was somehow wrong because it went against God’s will that all men be saved, and I believed God’s will, by definition, was right. But I still acted in accordance with my personal happiness some of the time. I said God’s will was the only end-in-itself, but I didn’t act like it. So like you said, inconsistency. Thanks for helping me with the connection there.
Although in the context of this conversation, I can’t say there’s anything inherently wrong with flinching
I agree with you that there’s nothing inherently wrong with it, but I don’t think this is a case of someone making a conscious decision to pursue their terminal goals. I think it’s a case of “I’m just going to follow my impulse without thinking”.
I don’t know your religious background, but if you don’t have one, that’s really impressive, given that you haven’t actually experienced much belief-in-belief since Santa (if you ever did).
Haha thanks. I can’t remember ever believing in belief, but studying this rationality stuff actually teaches you a lot about how other people think.
I was raised Jewish, but people around me were about as not religious as it gets. I think it’s called Reform Judaism. In practice it just means, “go to Hebrew school, have a Bar/Bat Mitzvah, celebrate like 3-4 holidays a year and believe whatever you want without being a blatant atheist”.
I’m 22 years old and I genuinely can’t remember the last time I believed in any of it, though. I had my Bar Mitzvah when I was 13 and I remember not wanting to do it and thinking that it’s all BS. Actually I think I remember being in Hebrew school one time when we were being taught about God, and I at the time believed in God, and I was curious how they knew that God existed and I asked, and they basically just said, “we just know”, and I remember being annoyed by that answer. And now I’m remembering being confused because I wanted to know what God really was, and some people told me he was human-like and had form, and some people just told me he was invisible.
I will say that I thoroughly enjoy Jewish humor though, and I thank the Jews very much for that :). Jews love making fun of their Jewish mannerisms, and it’s all in good fun. Even things that might seem mean are taken in good spirit.
Any time a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn’t exist.
Hey, um… I have a question. I’m not sure if you’re comfortable talking about it though. Please feel free to not answer.
It sounds really stressful believing that stuff. Like it seems that even people with the strongest faith spend some time deviating from those instructions and do things like have fun or pursue their personal interests. And then you’d feel guilty about that. Come to think of it, it sounds similar to my guilt for ever spending time not pursuing ambitions.
And what about believing in Hell? From what I understand, Christians believe that there’s a very non-negligible chance that you end up in Hell, suffering unimaginably for eternity. I’m not exaggerating at all when I say that if I believed that, I would be in a mental hospital crying hysterically and trying my absolute hardest to be a good person and avoid ending up in Hell. Death is one of my biggest fears, and I also fear the possibility of something similar to Hell, even though I think it’s a small possibility. Anyway, I never understood how people could legitimately believe in Hell and just go about their lives like everything is normal.
but I don’t think this is a case of someone making a conscious decision to pursue their terminal goals.
Few people make that many conscious decisions! But it could be a subconscious decision that still fulfills the goal. For my little sister, this kind of thing actually is a conscious decision. Last Christmas break, when I first realized that, unlike almost all of my close friends and family in Wisconsin, I didn’t like our governor all that much, she eventually cut me off, saying, “Dad and I aren’t like you, Ellen. We don’t like thinking about difficult issues.” Honesty, self-awareness, and consciously-selected ugh fields run in the family, I guess.
I was raised Jewish, but people around me were about as not religious as it gets.
That’s funny. I just met someone like you, probably also a Reform Jew, who told me some jokes and about all these Jewish stereotypes that I had never even heard of, and they seem to fit pretty well.
Hey, um… I have a question. I’m not sure if you’re comfortable talking about it though. Please feel free to not answer.
It sounds really stressful believing that stuff. Like it seems that even people with the strongest faith spend some time deviating from those instructions and do things like have fun or pursue their personal interests. And then you’d feel guilty about that. Come to think of it, it sounds similar to my guilt for ever spending time not pursuing ambitions.
It’s exactly like that, just multiplied times infinity (adjusted for scope insensitivity) because hell is eternal.
And what about believing in Hell? From what I understand, Christians believe that there’s a very non-negligible chance that you end up in Hell, suffering unimaginably for eternity. I’m not exaggerating at all when I say that if I believed that, I would be in a mental hospital crying hysterically and trying my absolute hardest to be a good person and avoid ending up in Hell.
Yeah, hell is basically what led me away from Christianity. If you’re really curious, how convenient, I wrote about it here to explain myself to my Christian friends. You’ll probably find it interesting. You can see how recent this is for me and imagine what a perfect resource the rationality book has been. I just wish I had discovered it a few weeks earlier, when I was in the middle of dozens of religious discussions with people, but I think I did an okay job explaining myself and talking about biases I had recognized in myself but didn’t even know were considered “biases”, like not giving much weight to evidence that opposes your preferred belief (label: confirmation bias) and the tendency to believe what people around you believe (label: I forget, but at least I now know it has one) and many more.
But how did I survive, believing in hell? Well, there’s this wonderful book of the Bible called Ecclesiastes that seems to mostly contradict the rest of Christian teachings. Most people find it depressing. Personally, I loved it and read it every week to comfort myself. I still like it, actually. It’s short, you could read it in no time, but here’s a sample from chapter 3:18-22:
18 I also said to myself, “As for humans, God tests them so that they may see that they are like the animals. 19 Surely the fate of human beings is like that of the animals; the same fate awaits them both: As one dies, so dies the other. All have the same breath; humans have no advantage over animals. Everything is meaningless. 20 All go to the same place; all come from dust, and to dust all return. 21 Who knows if the human spirit rises upward and if the spirit of the animal goes down into the earth?”
22 So I saw that there is nothing better for a person than to enjoy their work, because that is their lot. For who can bring them to see what will happen after them?
But it could be a subconscious decision that still fulfills the goal.
True. In the case of my friend, I don’t think it was, but in cases where it is, then I think that it could be a perfectly sensible approach (depending on the situation).
This was the relevant part of the conversation:
So what if the situation was: tell the truth and make you less happy, your family less happy, and the rest of the world unaffected; or lie.
i would never approach it that way. my decisions aren’t that calculated (at least not consciously).
It’s possible that he had legitimately decided earlier to not put that much calculation into these sorts of decisions, because he thinks that this strategy will best lead to his terminal goals of happiness or goodness or whatever. But this situation actually didn’t involve any calculation at all. The calculations were done for him already—he just had to choose between the results.
To me it seems more likely that he a) is not at all used to making cost-benefit analyses and makes his decisions by listening to his impressions of how virtuous things seem. And b) in situations of choosing between options that both produce unpleasant feelings of unvirtuousness, he flinches away from the reality of the (hypothetical) situation.
I should mention that I think that >99% of people are quite quite stupid. Most people don’t seem very agenty to me, given the way I define it. Most people seem to not put much thought behind the overwhelming majority of what they do and think and instead just respond to their immediate feelings and rationalize it afterwards. Most people don’t seem to have the open-mindedness to give consideration to ideas that go against their impulses (this isn’t to say that these impulses are useless), nor the strength to admit hard truths and choose an option in a lose-lose scenario.
Really, I don’t know how to word my thoughts very well on this topic. Eliezer addresses a lot of the mistakes people make in his articles. It’d take some time for me to really write up my thoughts on this. And I know that it makes me sound like a Bad Person for thinking that >99% of people are really stupid, but unfortunate truths have to be dealt with. The following isn’t a particularly good argument, but perhaps it’s an intuitive one: consider how we think people 200 years ago were stupid, and people 200 years ago thought people 400 years ago were stupid, etc. (I don’t think this means that everyone will always be stupid. Ie. I think that not being stupid means something in an absolute sense, not just a relative one).
It’s exactly like that, just multiplied times infinity (adjusted for scope insensitivity) because hell is eternal.
I’m truly truly sorry that you had experienced this. No one should ever have to feel that. If there’s anything I could do or say to help, please let me know.
If you’re really curious, how convenient, I wrote about it here to explain myself to my Christian friends.
I had actually seen the link when I looked back at your first post in the welcome thread at some point. I confess that I just skimmed it briefly and didn’t pick up on the core idea. However, I’ve just read it more carefully.
I love your literary device. The Banana Tree thought experiment and analogy, that is (I don’t actually know what a literary device is). And the fact that people believe that—a) God is caring, AND b) God created Hell and set the circumstances up where millions/billions of people will end up there—is… let’s just say inconsistent by any reasonable definition of the words consistent, caring and suffering.
In the same way that you talk about how God is bad for creating Hell, I actually think something similar about life itself. I’m a bit pessimistic. The happiness set point theory says that we have happiness set points and that we may temporarily deviate above or below them, but that we end up hovering back to our set points.
Furthermore, this set point seems to be quite neutral and quite consistent amongst humans. What I mean by neutral is that minute-to-minute, most people seem to be in a “chill” state of mind, not really happy or sad. And we don’t spend too much time deviating from that. And there’s also the reality that we’re all destined to die. Why does life have to be mediocre? Why can’t it be great? Why do we all have to get sick and die? I don’t know how or if reality was “created”, but to anthropomorphize, why did the creator make it like this? From the perspective of pre-origin-of-reality (if that’s even a thing), I feel the same feelings about neutralness that you expressed about the badness of Hell (but obviously Hell is far worse than neutralness). From a pre-origin perspective, reality could just as easily have been amazing and wonderful, so the fact that it’s neutral and fleeting seems… disappointing?
But how did I survive, believing in hell? Well, there’s this wonderful book of the Bible called Ecclesiastes that seems to mostly contradict the rest of Christian teachings. Most people find it depressing. Personally, I loved it and read it every week to comfort myself. I still like it, actually. It’s short, you could read it in no time, but here’s a sample from chapter 3:18-22:
If it got you through believing in hell, I will most certainly read it.
To me it seems more likely that he a) is not at all used to making cost-benefit analyses and makes his decisions by listening to his impressions of how virtuous things seem. And b) in situations of choosing between options that both produce unpleasant feelings of unvirtuousness, he flinches away from the reality of the (hypothetical) situation.
So a possible distinction between virtue ethicists and consequentialists: virtue ethicists pursue their terminal values of happiness and goodness subconsciously, while consequentialists pursue the same terminal values consciously… as a general rule? And so the consequentialists seem more agenty because they put more thought into their decisions?
I think that not being stupid means something in an absolute sense, not just a relative one)
I might agree with you about >99% of people being stupid. What exactly do you mean by it though? That they don’t naturally break things down like a reductionist? That they rarely seem to take control of their own lives, just letting life happen to them? Or are you talking about knowledge? We’ve definitely increased our knowledge over the past 400 years, but I don’t think we’ve really increased our intelligence.
And the fact that people believe that—a) God is caring, AND b) God created Hell and set the circumstances up where millions/billions of people will end up there—is… let’s just say inconsistent by any reasonable definition of the words consistent, caring and suffering.
Yeah, that’s what I was trying to get across, and it’s why I titled the post “Do You Feel Selfish for Liking What You Believe”! I hesitated to include the analogy since it was the only part with the potential to offend people (two people accused me of mocking God) and taint their thoughts about the rest of the post, but in the end I left it, partly as a hopefully thought-provoking interlude between the more theological sections and mostly so I could give my page a more fun title than Deconversion Story Blog #59845374987.
The happiness set point theory makes sense! Actually, it makes a lot of sense, and I think it’s connected to the idea that most people do not act in agenty ways! If they did, I think they could increase their happiness. Personally, I don’t find that it applies to me much at all. My happiness has steadily risen throughout my life. I am happier now than ever before. I am now dubbing myself a super-agent. I think the key to happiness is to weed not only the bad stuff out of your life, but the neutral stuff as well. Let me share some examples:
I got a huge scholarship after high school to pursue a career in the medical field (I never expected to love my career, but that wasn’t the goal; I wanted to fund lots of missionaries). I was good at my science classes, and I didn’t dislike them, but I didn’t like them either. I realized this after my first year of college. I acknowledged the sunk cost fallacy, cut my losses, wrote a long, friendly letter to the benefactor to assuage my guilt, and decided to pursue another easy high-income career instead, law, which would allow me to major in anything I wanted. So I sat down for a few hours, considered like 6 different majors, evaluated the advantages and disadvantages, and came up with a tie between Economics and Spanish. I liked Econ for many reasons, but mainly because the subject matter itself was truly fascinating to me; I liked Spanish not so much for the language itself but because the professor was hilarious, fun, casual, and flexible about test/paper deadlines, I could save money by graduating in only 3 years, and I would get the chance to travel abroad. I flipped a coin between the two, and majored in Spanish. Result: a lasting increase in happiness.
My last summer after college, I was a cook at a boy scout camp. It was my third summer there. I worked about 80 hours a week, and the first two years I loved it because my co-workers were awesome. We would have giant water fights in the kitchen (dumping 5-gallon igloos on each other in the middle of the kitchen, standing on the roof and dropping regular balloons filled with water on each other, etc.), we would play cribbage in between meals, hang out together, etc. I also had two good friends among the counselors. Anyway, that third year, my friends had left and it was still a pretty good job in a pretty, foresty area, but it wasn’t super fun like it had been. So after the first half of the summer, once I had earned enough to pay the last of my college debt, I found someone to replace me at my job, wrote out pages of really detailed instructions for everything (to assuage my guilt), and quit to go spend a month “on vacation” at home with my family before leaving for Guatemala. Result: a lasting increase in happiness.
I dropped down to work part-time in Guatemala to pursue competitive running more. I left as soon as I got a stress fracture. I chose a family to nanny for based on the family itself, knowing that would affect my day-to-day happiness more than the location (which also turned out to be great).
My belief in God was about to cause not only logical discontent in my mind, but also a suboptimal level of real life contentment that I could not simply turn into an “ugh field” as I almost set off to pursue a career I didn’t love to donate to missionaries. Whatever real-life security benefits it brought me were about to become negligible, so I finally spent a few very long and thoughtful days confronting my doubts and freed myself from that belief.
Everyday examples of inertia-breaking, happiness-inducing activities: I’m going for a run and run past a lilac bush. It smells really good, so I stop my watch and go stand by it for a while. I’m driving in the car, and there’s a pretty lookout spot, so I actually stop for a while. I do my favorite activities like board games, pickup sports, and nature stuff like hiking and camping every weekend, not just once in a while. I don’t watch TV because there’s always something I’d rather be doing. If I randomly wake up early, I consciously think about whether I would get more satisfaction out of lazing around in bed, or getting up to make a special breakfast for the kids I nanny for.
What’s my point? I have very noticeably different happiness levels based on the actions I take. If I’m just going with the flow, taking life as it comes, I have an average amount of happiness compared to those around me; I occasionally do let myself slip into neutral situations. If I put myself in a super fun and amazing situation, I have way more happiness than those around me (which is a good thing, since happiness is contagious). Sometimes I just look at my life and can’t help but laugh with delight at how wonderful it is. If I ever get a sense that my happiness is starting to neutralize/stabilize, I make a big change and get it back on the right track. For instance, I think that thanks to you, I have just realized that my happiness is not composed of pleasure alone, but also personal fulfillment. I always knew that “personal fulfillment” influenced other people, but I’m either just realizing/admitting this to myself, or my preferences are changing a bit as I get older, but I think it influences me too. So, I’m spending some time reading and thinking and writing, instead of only playing games and reading fiction and cooking and hiking. Result: I am even happier than I knew possible :)
Maybe I don’t fully understand that happiness set point theory, but I don’t think it is true for everyone, just 99% of people or so. I don’t think it is true for me. That said, I will acknowledge that an individual’s range of potential happiness levels is fixed. Some happy-born people, no matter how bad their lives get, will never become as unhappy as naturally unhappy people with seemingly good lives are.
Ok, could we like Skype or something and you tell me everything you know about being happy and all of your experiences? I have a lot to learn and I enjoy hearing your stories!
Also, idk if you’ve come across this yet but what you’re doing is something that us lesswrongers like to call WINNING. Which is something that lesswrongers actually seem to struggle with quite a bit. There’s a handful of posts on it if you google. Anyway, not only are you killing it, but you seem to be doing it on purpose rather than just getting lucky. This amount of success with this amount of intentionality just must be analyzed.
You sound like you are somewhat intimidated by the people here and that they all seem super smart and everything. Don’t be. Your ability to legitimately analyze things and steer your life in the direction you want it is way more rare than you’d guess. You should seriously write about your ideas and experiences here for everyone to benefit from.
Or maybe you shouldn’t. Idk. You probably already know this, but never just listen to me or what someone else tells you (obviously). My point really is that I sense that others could legitimately benefit from your stories—idk if you judge that writing about it is the best thing for you to be doing though.
Sorry if I’m being weird. Idk. Anyway, here are the beginnings of a lot of questions I have:
Your idea to avoid not only negative things but also neutral things sounded pretty good at first, and then made a lot more sense when I heard your examples. I started thinking about my own life and the choices I’ve made and am starting to see that your approach probably would have made me better off. But… I can’t help but point out that it can’t always be true. Sometimes the upfront costs of mediocrity must be worth the longer-term benefits, right? But it seems like a great rule of thumb. Why? What makes a good rule of thumb? Well, my impression is that aside from being mostly right, it’s about being mostly right in a way that people normally don’t get right. Ie. being useful. And settling for neutralness instead of awesomeness seems to be a mistake that people make a lot. My friends give me shit for being close-minded (which I just laugh at). They point out how I almost never get convinced and change my mind (which is because normal people almost never think of things that I haven’t taken into consideration myself already). Anyway, I think that this may actually change my outlook on life and lead to a change in behavior. Congratulations. …so my question here was “do you just consider this a rule of thumb, and to what extent?”
This question is more just about you as a case study rather than your philosophy (I hope that doesn’t make me sound too much like a robot) - how often do you find yourself sacrificing the short term for the long term? And what is your thinking in these scenarios? And in the scenarios when you choose not to? Stories are probably useful.
You say you did competitive running. Forgive me, but I’ve never understood competitive running. It’s so painful! I get that lighter runs can be pleasant, but competitive running seems like prolonged pain to me. And so I’m surprised to hear that you did that. But I anticipate that you had good reason for doing so. Because 1) it seems to go against your natural philosophy, and you wouldn’t deviate from your natural philosophy randomly (a Bayesian would say that the prior probability of this is low) and 2) you’ve shown yourself to be someone who reasons well and is a PC (~an agent).
There’s an interesting conversation to be had about video games/TV and happiness vs. “physical motivators”. I’m a huge anti-fan of videogames/TV too. I have a feeling you have some good thoughts on this.
Your thoughts on the extent to which strategic thinking is worth it. I see a cost-benefit of stress vs. increased likelihood of good decisions. Also, related topic—I notice that you said you spent a big chunk of time making that major decision. One of my recent theories as to how I could be happier and more productive is to allocate these big chunks of time, and then not stress over optimizing the remaining small chunks of time, based on how I judge the cost-benefit. But historically, I tend to overthink things and suffer from the stress of doing so. A big part of this is because I see the opportunity to analyze things strategically everywhere, and every time I notice myself forgoing an opportunity, I kick myself. I know it’s not rational to pursue every analysis, but… my thoughts are a bit jumbled.
Just a note—I hope rationality doesn’t taint you in any way. I sense that you should err on the side of maintaining your approach. Incremental increases in rationality usually don’t lead to incremental increases in winning, so be careful. There’s a post on that somewhere I could look up for you if you want. Have you thought about this? If so, what have your thoughts been?
Do you find mocking reality to be fun? I do sometimes. That didn’t make sense—let me explain. At some point in my junior year of college I decided to stop looking at my grades. I never took school seriously at all (since middle school at least). I enjoyed messing around. On the surface this may seem like I’m risking not achieving the outcomes I want, and that’s true, but it has the benefit of being fun, and I think that people really underestimate this. It was easy for me to not take school seriously, but I should probably apply this in life more. Idk. I’m also sort of good at taking materialistic things really not seriously. I ripped up $60 once to prove to myself that it really doesn’t matter :0. And it made me wayyy too happy, which is why I haven’t done it since (idk if that’s really really weird of me or not). I would joke around with my friends and say, “Yo, you wanna rip?”. And I really was offering them my own money up to say $100 to rip up so they could experience it for themselves. (And I fully admit that this was selfish because that money could have gone to starving kids, but so could a lot of the money I and everyone else spends. It was simply a trade of money for happiness, and it was one of the more successful ones I’ve made.) Anyway, I noticed that you flipped a coin to decide your major and got some sort of impression that something like this is your reasoning. But I only estimate a 20-30% probability of that.
I’m curious how much your happiness actually increased throughout your life. You seem to be evidence against the set point theory, which is huge. Or rather, that the set point theory in its most basic form is missing some things.
Actually, I should say that I’m probably getting a little carried away with my impressions and praise. I have to remember to take biases into account and acknowledge and communicate the truth. I have a tendency to get carried away when I come across certain ideas (don’t we all?). But I genuinely don’t think I’m getting that carried away.
Thoughts on long term planning.
Um, I’ll stop for now.
Time to go question every life decision I’ve ever made.
Also, idk if you’ve come across this yet but what you’re doing is something that us lesswrongers like to call WINNING. Which is something that lesswrongers actually seem to struggle with quite a bit. There’s a handful of posts on it if you google. Anyway, not only are you killing it, but you seem to be doing it on purpose rather than just getting lucky. This amount of success with this amount of intentionality just must be analyzed.
Hahaha, reading such fanmail just increased my happiness even more :) Sure, we can skype sometime. I’m going to wrap up my thoughts on terminal values first and then I’ll respond more thoroughly to all this, and maybe you can help me articulate some ideas that would be useful to share!
In the meantime, this reminded me of another little happiness tip I could share. So I don’t know if you’ve heard of the five “love languages” but they are words of affirmation, acts of service, quality time, gifts, and physical touch. Everyone gives and receives in different ways. For example, I like receiving words of affirmation, and I like giving quality time. My mom likes receiving in physical touch, and giving in acts of service. The family I nanny for (in general) likes receiving in quality time and giving in gifts (like my new kindle which they gave me just in time to get the rationality ebook!) For people that you spend a lot of time with (family, partner, best friends, boss, co-workers), this can be worthwhile to casually bring up in conversation. Now when people know words of affirmation make me happy, they’ll be more likely to let me know when they think of something good about me or appreciate something I do. If I know the family I nanny for values quality time, I might sit around the table and chat with them an extra hour even though I’m itching to go read more of the rationality book. I know my mom values physical touch, so I hug her a lot and stuff even though I’m not generally super touchy. Happiness all around, although these decisions do get to be habits pretty quickly and don’t require much conscious effort :)
Just submitted my first article! I really should have asked you to edit it… if you have any suggestions of stuff/wording to change, let me know, quick!
Anyway, I’ll go reply to your happiness questions now :)
Just submitted my first article! I really should have asked you to edit it… if you have any suggestions of stuff/wording to change, let me know, quick!
At first very quick glance, there are some things I would change. I’ll try to offer my thoughts quickly.
Edit: LW really needs a better way of collaborating. Ex. https://medium.com/about/dont-write-alone-8304190661d4. One of the things I want to do is revamp this website. Helping rational people interact and pursue things seems to be relatively high impact.
Anyway, I’ll go reply to your happiness questions now :)
Hey, no rush. It’s a big topic and I don’t want to overwhelm you (or me!) by jumping around so much. Was there anything else you wanted to finish up first? Do you want to take a break from this intense conversation? I really don’t want to put any pressure on you.
Ok, yeah, let’s take a little break! I’m actually about to go on a road trip to the Grand Canyon, and should really start thinking about the trip and get together some good playlists/podcasts to listen to on the drive. I’ll be back on Tuesday though and will be ready to jump back into the conversation :)
I learned something new and seemingly relevant to this discussion listening to a podcast on the way home from the Grand Canyon: Maslow’s hierarchy of needs, which as knowledgeable as you seem, you’re probably already familiar with. Anyway, I think I’ve been doing just fine on the bottom four my whole life. But here’s the fifth one:
Self-Actualization needs—realizing personal potential, self-fulfillment, seeking personal growth and peak experiences.
So it seems like I’m working backwards on this self-actualization list now. I’ve had tons of super cool peak experiences already. Now, for the first time, I’m kind of interested in personal growth, too. On the page I linked, it talked about characteristics of self-actualizers and behavior of self-actualizers… I think it all describes me already, except for “taking responsibility and working hard.” Maybe I should just trust this psychology research and assume that if I become ambitious about something, it will actually make me even happier. What do you think? Have you learned much psychology? How relevant is this to rationality and intentionally making “winning” choices?
:) I remember reading about it for the first time in the parking lot when I was waiting for my Mom to finish up at the butcher. (I remember the place I was at when I learned a lot of things)
Psychology is very interesting to me and I know a pretty good amount about it. As far as things I’m knowledgeable about, I know a decent amount about: rationality, web development, startups, neuroscience and psychology (and basketball!). And I know a little bit about economics, science in general, philosophy, and maybe business.
Anyway, I think I’ve been doing just fine on the bottom four my whole life. But here’s the fifth one:
Interesting. I actually figured that you were good with the top one too. For now, I’ll just say that I see it as more of a multiplier than a hole to be filled up. Ie. someone with neutral self-actualization would mostly be fine—you multiply zero (neutral) by BigNumber. Contrast this with a hole-to-be-filled-up view, where you’re as fulfilled as the hole is full. (Note that I just made this up; these aren’t actual models, as far as I know). Anyway, in the multiplier view, neutral is much much better than negative, because the negative is multiplied by BigNumber. So please be careful!
Hi again :) I’m back from vacation and ready to continue our happiness discussion! I’m not sure how useful this will be since happiness is so subjective, but I’m more than willing to be analyzed as a case study, it sounds fun!
You sound like you are somewhat intimidated by the people here and that they all seem super smart and everything. Don’t be. Your ability to legitimately analyze things and steer your life in the direction you want it is way more rare than you’d guess.
Oh, I still am! I wouldn’t trade my ability to make happiness-boosting choices for all their scientific and historical knowledge, but that doesn’t mean I’m not humbled and impressed by it. Now for your bullet points...
Avoiding neutralness isn’t actually a rule of thumb I’ve consciously followed or anything. It just seemed like a good way to summarize the examples I came up with of acting to increase my happiness. It does seem like a useful rule of thumb though, and I’m psyched that you think it could help you/others to be happier :) I might even consciously follow it myself from now on. But you ask whether putting up with some mediocrity upfront is sometimes worth the long-term benefits… you may well be right, but I can’t come up with any examples off the top of my head. Can you?
I don’t have any clear strategies for choosing between short-term vs. long-term happiness. I think my general tendency is to favor short-term happiness, kind of a “live in the moment” approach to life. Obviously, this can’t be taken too far, or we’ll just sit around eating ice cream all day. Maybe a good rule of thumb—increase your short-term happiness as much as possible without doing anything that would have clear negative effects on your long-term happiness? Do things that make you happy in the short term iff you think there’s a very low probability you’ll regret them? I think in general people place too much emphasis on the long term. Like me choosing to change my major. If I ultimately were going to end up in a career I didn’t love, and I had already accepted that, what difference did it make what I majored in? In the long term, no predictable difference. But in the short term, those last 2 years would quite possibly account for over 2% of my life. Which is more than enough to matter, more than enough to justify a day or two in deep contemplation. I think that if I consistently act in accordance with my short-term happiness (while avoiding long-term unhappiness like spending all my money and having nothing left for retirement, or eating junk food and getting fat), I’ll consistently be pretty happy. Could I achieve greater total happiness if I focused only on the long term? Maybe! But I’m so happy right now, the potential reward doesn’t seem worth the risk.
I love that you asked about my competitive running. I do enjoy running, but I rarely push myself hard when I’m running on my own. The truth is, I wouldn’t have done it on my own. Running was a social thing for me. My best friend there was a Guatemalan “elite” (a much lower standard for this there than in the US, of course), and I was just a bit faster than she was. So we trained together, and almost every single practice was a little bit easier for me than it was for her. Gradually, we both improved a ton and ran faster and faster times, but I was always training one small notch below what I could have been doing, so it didn’t get too painful. In the races, my strategy was always negative splits—start out slowly, then pass people at the end. This was less painful and more fun. Of course, there was some pain involved, but I could sacrifice a few minutes of short-term pain in a race for the long-term benefits of prize money and feeling good about the race the whole next week. But again, it was the social aspect that got me into competitive running. I never would have pursued it all on my own; it was just a great chance to hang out with friends, practice my Spanish, stay fit, and get some fresh air.
Is strategic thinking worth it? I have no idea! I don’t think strategically on purpose; I just can’t help it. As far as I know, I was born thinking this way. We took a “strengths quest” personality test in college and “Strategic” was my number one strength. (My other four were relator, ideation, competitive, and analytical). I’m just wired to do cost-benefit analyses, I guess. Come to think of it, those strengths probably play a big role in my happiness and rationality. But for someone who isn’t instinctively strategic, how important are cost-benefit analyses? I like your idea of allocating large chunks of time, but not worrying too much in the day-to-day stuff. This kind of goes back to consequentialism vs. virtue ethics. Ask yourself what genuinely makes you happy. If it’s satisfying curiosity, just aim to ‘become more curious’ as an instrumental goal. Maybe you’ll spend time learning something new when you actually would have been happier spending that time chatting with friends, but instrumental goals are convenient and if they’re chosen well, I don’t think they’ll steer you wrong very often. Then, if you need to, maybe set aside some time every so often and analyze how much time you spend each day doing which activities. Maybe rank them according to how much happiness they give you (both long and short term, no easy task) and see if you spend time doing something that makes you a little happy, but may not be the most efficient way to maximize your happiness. Look for things that make you really happy that you don’t do often enough. Don’t let inertia control you too much, either. There’s an old saying among runners that the hardest step is the first step out the door, and it’s true. I know I’ll almost always be glad once I’m running, and feel good afterward. If I ever run for like 5 minutes and still don’t feel like running, I’ll just turn around and go home. This has happened maybe 5 times, so overall, forcing myself to run even when I don’t think I feel like it has been a good strategy.
Thanks! I don’t think it will taint me too much. Honestly, I think I had exceptionally strong rationality skills even before I started reading the ebook. Some people have lots of knowledge, great communication skills, are very responsible, etc...and they’re rational. I haven’t developed those other skills so well (yet), but at least I’m pretty good at thinking. So yeah, honestly I don’t think that reading it is going to make me happy in that it’s going to lead me to make many superior decisions (I think we agree I’ve been doing alright for myself) but it is going to make me happy in other ways. Mostly identity-seeking ways, probably.
I got a kick out of your money ripping story. I can definitely see how that could make you way more happy than spending it on a few restaurant meals, or a new pair of shoes, or some other materialistic thing :) I wouldn’t do it myself, but I think it’s cool! As for not taking school seriously for the sake of fun, I can relate… I took pride in strategically avoiding homework, studying for tests and writing outlines for papers during other classes, basically putting in as little effort as I could get away with and still get good grades (which I wanted 90% because big scholarship money was worth the small trade-off and 10% simply because my competitive nature would be annoyed if someone else did better than I did). In hindsight, I think it would have been cool to pay more attention in school and come out with some actual knowledge, but would I trade that knowledge for the hours of fun hanging out with my neighbors and talking and playing board games with my family after school? Probably not, so I can’t even say I regret my decision.
As for me flipping a coin… I think that goes with your question about how much cost-benefit analysis it’s actually worthwhile to do. I seriously considered like 6 majors, narrowed it down to 2, and both seemed like great choices. I think I (subconsciously) thought of diminishing marginal returns and risk-reward here. I had already put a lot of thought into this, and there was no clear winner. What was the chance I would suddenly have a new insight and a clear winner would emerge if I just invested a few more hours of analysis, even with no new information? Not very high, so I quit while I was ahead and flipped a coin.
How much has my happiness actually increased? Some (probably due to an increase in autonomy when I left home) but not a ton, really… because I believe in a large, set happiness range, and the decisions I make keep me at the high end of it. But like I said, sometimes it will decrease to a “normal” level, and it’s soo easy to imagine just letting it stay there and not taking action.
I don’t think you’re getting carried away, either, but maybe we just think really alike :) Happiness is important to everyone, though, so if there’s any way it could be analyzed to help people, it seems worth a try.
Long-term planning depends on an individual’s values. Personally I think most people overrate it a bit, but it all depends on what actually makes a person happy.
So a possible distinction between virtue ethicists and consequentialists: virtue ethicists pursue their terminal values of happiness and goodness subconsciously, while consequentialists pursue the same terminal values consciously… as a general rule?
I think that’s “true” in practice, but not in theory. An important distinction to make.
And so the consequentialists seem more agenty because they put more thought into their decisions?
Definitely.
What exactly do you mean by it though?
The problem is that I’m not completely sure :/. I think a lot of it falls under the category of being attached to their beliefs though. Here’s an example: I was just at lunch with a fellow programmer. He said that “the more programmers you put on a project the better”, and he meant it as an absolute rule. I pointed out the incredibly obvious point that it depends on the trade off between how much they cost and how much profit they bring in. He didn’t want to believe that he was wrong, and so he didn’t actually give consideration to what I was saying, and he continues to believe that “the more programmers you put on a project the better”.
This is an extreme case, but I think that analogous things happen all the time. The way I think about it, knowledge and aptitude don’t even really come into play, because close-mindedness limits you so much earlier on than knowledge and aptitude do. “Not stupid” is probably a better term than “smart”. To me, in order to be “not stupid”, you just have to be open-minded enough to give things an honest consideration and not stubbornly stick to what you originally believe no matter what.
In short, I think I’d say that, to me, it’s mostly about just giving an honest effort (which is a lot harder than it sounds).
I hesitated to include the analogy since it was the only part with the potential to offend people (two people accused me of mocking God) and taint their thoughts about the rest of the post,
What are your objectives with this blog? To convince people? Because you like writing?
Edit: idea—maybe your way of having an impact on the world is to just keep living your awesome and happy life and lead by example. Maybe you could blog about it too. Idk. But I think that just seeing examples of people like you is inspiring, and could really have a pretty big impact. It’s inspired me.
I think that’s “true” in practice, but not in theory. An important distinction to make.
Haha, what?? Interesting.
To me, in order to be “not stupid”, you just have to be open-minded enough to give things an honest consideration and not stubbornly stick to what you originally believe no matter what.
Aha, so basically, to you, stupidity involves a lot of flinching away from ideas or evidence that contradict someone’s preconceived notions about the world. And lack of effort at overcoming bias. Yeah, most people are like that, even lots of people with high IQs and PhDs. I think you’re defining “stupid” as “irrational thinking + ugh fields”, which was what I originally thought you meant until I read your example about past vs. present. Why do you think we’ll be less stupid in the future then? Just optimism, or is this connected to your thoughts on AI?
What are your objectives with this blog? To convince people? Because you like writing?....Maybe your way of having an impact on the world is to just keep living your awesome and happy life and lead by example. Maybe you could blog about it too.
In the case of the only three posts I’ve done, they were just to defend myself, encourage anyone else who was going through similar doubts, and stir up some cognitive dissonance. I do like writing though (not so much writing itself, I have a hard time choosing the right words… but I love sharing ideas) and maybe I will soon blog about how rationality can improve happiness :) :) I actually am just about to write a “Terminal Virtues” post and share my first idea on LW. And then I want to write something with far more practical value, a guide to communicating effectively and getting along well with less rational people :)
But I’ll say quickly that you seem like the most awesome person I’ve ever “met”. And I’m going to have to get some advice from you about being happy
Aw, well thanks! I am enjoying this conversation immensely, partly because I’ve never talked to someone so strategic, analytical, open-minded, and knowledgeable before, and I really appreciate those qualities. And partly because I feel like even the occasional people who think I’m awesome don’t appreciate me for quite the same reason I’ve always “appreciated” myself, which I always thought was “because I’m pretty good at thinking” and which I can now call “rationality” :)
In practice, it seems to me that a lot of virtue ethicists value happiness and goodness a lot. But in theory, there’s nothing about being a virtue ethicist that says anything about what the virtues themselves are.
But I’m realizing that my incredibly literal way of thinking about this may not be that useful and that the things you’re paying attention to may be more useful. But at the same time, being literal and precise is often really important. I think that in this case we could do both, and as a team we have :)
Yeah, most people are like that, even lots of people with high IQs and PhDs.
Exactly. Another possibly good way to put it: people who are smart in the traditional way (high IQ, PhD...) have their smartness limited very much to certain domains. Ie. there might be a brilliant mathematician who has proved incredibly difficult theorems, but just doesn’t have the strength to admit that certain basic things are true. I see a lot of traditionally smart people act very stupidly in certain domains. To me, I judge people at their worst when it comes to “not stupidness”, which is why I have perhaps extreme views. Idk, it makes sense to me. There’s something to be said for the ability to not stoop to a really low level. Maybe that’s a good way to put it—I judge people based on the lowness they’re capable of stooping to. (Man, I’m losing track of how many important things I’ve come across in talking to you.)
And similarly with morality—I very much judge people by how they act when it’s difficult to be nice. I hate when people meet someone new and conclude that they’re “so nice” just because they acted socially appropriately by making small talk and being polite. Try seeing how that person acts when they’re frustrated and are protected by the anonymity of being in a car. The difference between people at their best and their worst is huge. This clip explains exactly what I mean better than I could. (I love some of the IMO great comedians like Louis CK, Larry David and Seinfeld. I think they make a handful of legitimately insightful points about society, and they articulate and explain things in ways that make so much sense. In an intellectual sense, I understand how difficult it is to communicate things in such an intuitive way. Every little subtlety is important, and you really have to break things down to their essence. So I’m impressed by a lot of comedians in an intellectual sense, and I don’t know many others who think like that.)
And I take pride in never/very rarely stooping to these low levels. I love basketball and play pick up a lot and it’s amazing how horrible people are and how low they stoop. Cheating, bullying, fighting, selfishness, pathetic and embarrassing ego dances etc. I never cheat, ever (and needless to say I would never do any of the other pathetic stuff). And people know this and never argue with me (well, not everyone).
not so much writing itself, I have a hard time choosing the right words… but I love sharing ideas
Oh. I love trying to find the right words. Well, sometimes it could be difficult, but I find it to be a “good difficult”. One of my favorite things to do, and one of the two or three things I think I’m most skilled at, is breaking things down to their essence. And that’s often what I think choosing the right words is about. (Although these comments aren’t exactly works of art :) )
maybe I will soon blog about how rationality can improve happiness
To the extent that your goal here is to influence people, I think it’s worth being strategic about. I could offer some thoughts if you’d like. For example, that blogger site you’re using doesn’t seem to get much audience—a site like https://medium.com/ might allow you to reach more people (and has a much nicer UI).
This is a really small point though, and there are a lot of other things to consider if you want to influence people. http://www.2uo.de/influence/ is a great book on how to influence people. It’s one of the Dark Arts of rationality. If you’re interested, I’d recommend putting it on your reading list. If you’re a little interested, I’d just recommend taking 5-10 minutes to read that post. If you’re not very interested, which something tells me is somewhat likely to be true, just forget it :)
One reason why I like writing is so I could refer people to my writing instead of having to explain it 100 times. Not that I ever mind explaining things, but at the same time it is convenient to just link to an article.
But a lot of people “write for themselves”. Ie. they like to get their ideas down in words or whatever, but they make it available in case people want to read it.
I am enjoying this conversation immensely, partly because I’ve never talked to someone else who was so strategic, analytic, and open-minded before, and knowledgeable, and I really appreciate those qualities.
I try :)
even the occasional people who think I’m awesome
Are you trying to be modest? I can’t imagine anyone not thinking that you’re awesome.
don’t appreciate me for quite the same reason I’ve always “appreciated” myself, which I always thought was “because I’m pretty good at thinking” which I can now call “rationality” :)
Yea, I feel the same way, although it doesn’t bother me. It takes a rational person to appreciate another rational person (“real recognize real”), and I don’t have very high expectations of normal people.
Terminal values are ends-in-themselves. They are psychological motivators, reasons that explain decisions. (Physical motivators like addiction and inertia can also explain our decisions, but a rational person might wish to overcome them.) For most people, the only true terminal values are happiness and goodness. There is almost always significant overlap between the two. Someone who truly has a terminal value that can’t be traced back to happiness or goodness in some way is either (a) ultra-religious or (b) a special case for the social sciences.
Happiness (“likes”) refers to the optimalness of your mind-state. Hedonistic pleasure and personal fulfillment are examples of things that contribute to happiness.
Goodness refers to what leads to a happier outcome for others.
Preferences (“wants”) are what we tend to choose. These can be based on psychological or physical motivators.
Instrumental values are goals or virtues that we think will best satisfy the terminal values of happiness and goodness.
We are not always aware of what actually leads to optimal mind-states in ourselves and others.
Good question. I conclude that morality (which, as far as I can tell, seems like the same thing as goodness and altruism) does exist, that our desire to be moral is the result of evolution (thanks for your scientific backup) just as much as our selfish desires are results of evolution. Whatever you call happiness, goodness falls into the same category. I think that some people are mystified when they make decisions that inefficiently optimize their happiness (like all those examples we talked about), but they shouldn’t be. Goodness is a terminal value too.
Also, morality is relative. How moral you are can be measured by some kind of altruism ratio that compares your terminal values of happiness and goodness. Someone can be “more moral” than others in the sense that he would be motivated more by goodness/altruism than he is by his own personal satisfaction, relative to them.
Is there any value in this idea? No practical value, except whatever personal satisfaction value an individual assigns to clarity. I wouldn’t even call the idea a conclusion as much as a way to describe the things I understand in a slightly more clear way. I still don’t particularly like ends-in-themselves.
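To make the ratio a little more concrete, here’s one rough toy sketch of what I mean (the weights and names are completely made up, just to illustrate comparing the two terminal values; it’s not something I’ve actually measured):

```python
# Toy "altruism ratio": how much weight someone's decisions give to
# goodness (others' happiness) relative to their own happiness.
# The weights below are invented purely for illustration.

def altruism_ratio(weight_on_goodness, weight_on_own_happiness):
    return weight_on_goodness / weight_on_own_happiness

# Person A is motivated mostly by personal satisfaction;
# Person B is motivated more by goodness/altruism.
person_a = altruism_ratio(weight_on_goodness=0.2, weight_on_own_happiness=1.0)
person_b = altruism_ratio(weight_on_goodness=0.8, weight_on_own_happiness=1.0)

print(person_a, person_b)  # 0.2 vs. 0.8, so B is "more moral" in this relative sense
```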
Reduction time:
Why should I pursue clarity or donate to effective charities that are sub-optimal happiness-maximizers?
Because those are instrumental values.
Why should I pursue these instrumental values?
Because they lead to happiness and goodness.
Why should I pursue happiness and goodness?
Because they’re terminal values.
Why should I pursue these terminal values?
Wrong question. Terminal values, by definition, are ends-in-themselves. So here the real question is not why should I, but rather, why do I pursue them? It’s because the alien-god of evolution gave us emotions that make us want to be happy and good...
Why did the alien-god give us emotions?
The alien-god does not act rationally. There is no “why.” The origin of emotion is the result of random chance. We can explain only its propagation.
Why should we be controlled by emotions that originated through random chance?
Wrong question. It’s not a matter of whether they should control us. It’s a fact that they do.
I pretty much agree. But I have one quibble that I think is worth mentioning. Someone else could just say, “No, that’s not what morality is. True morality is...”.
Actually, let me give you a chance to respond to that before elaborating. How would you respond to someone who says this?
Reduction time:
Very very well put. Much respect and applause.
One very small comment though:
The origin of emotion is the result of random chance.
I see where you’re coming from with this. If someone else heard this out of context they’d think, “No… emotion originates from evolutionary pressure”. But then you’d say, “Yeah, but where do the evolutionary pressures come from?” The other person would say, “Uh, ultimately the big bang I guess.” And you seem to be saying, “Exactly, and that’s the result of random chance.”
Some math-y/physicist-y person might argue with you here about the big bang being random. I think you could provide a very valid Bayesian counterargument saying that probability is in the mind, and that no one has a clue how the big bang/origin came to be, and so to anyone and everyone in this world, it is random.
Yeah, I have no clue what evolutionary pressure means, or what the big-bang is, or any of that science stuff yet. sigh I really don’t enjoy reading hard science all that much, but I enjoy ignorance even less, so I’ll probably try to educate myself more about that stuff soon after I finish the rationality book.
Ok, that’s perfectly fair. My honest opinion is that it really isn’t very practical and if it doesn’t interest you, it probably isn’t worth it. The value of it is really just if you’re curious about the nature of reality on a fundamental level. But as far as what’s practical, I think it’s skills like breaking things down like a reductionist, open mindedness, knowledge of what biases we’re prone to etc.
Yeah, I guess one person has only so much time… at least for now… I am curious, but maybe not quite enough to justify the immense amount of time and effort it would take me to thoroughly understand.
I pretty much agree. But I have one quibble that I think is worth mentioning. Someone else could just say, “No, that’s not what morality is. True morality is...”.
Example case:
True morality is following God’s will? Basically everyone who says this believes “God wants what’s best for us, even when we don’t understand it.” Their understanding of God’s will and their intuitive idea of what’s best for people rarely conflict though. But here’s an extreme example of when it could: Let’s say someone strongly believes (even in belief) in God, and for some reason thinks that God wants him to sacrifice his child. This action would go against his (unrecognized) terminal value of goodness, but he could still do it, subconsciously satisfying his (unrecognized) terminal value of personal happiness. He takes comfort in his belief in God and heaven. He takes comfort in his community. To not sacrifice the child would be to deny God and lose that comfort. These thoughts obviously don’t happen on a conscious level, but they could be intuitions?
Idk, feel free to throw more “true morality is...” scenarios at me...
Their understanding of God’s will and their intuitive idea of what’s best for people rarely conflict though.
What if it does conflict? Does that then change what morality is?
And to play devil’s advocate, suppose the person says, “I don’t care what you say, true morality is following God’s will no matter what the effect is on goodness or happiness.” Hint: they’re not wrong.
I hope I’m not being annoying. I could just make my point if you want.
But it seems like morality is just a word people use to describe how they think they should act! People think they should act in all sorts of ways, but it seems to me like they’re subconsciously acting to achieve happiness and/or goodness.
As for your quote… such a person would be very rare, because almost anyone who defines morality as God’s will believes that God’s will is good for humanity, even if she doesn’t understand why. This belief, and acting in accordance with it, brings her happiness in the form of security. I don’t think anyone says to herself “God has an evil will, but I will serve him anyway.” Do you?
But it seems like morality is just a word people use to describe how they think they should act!
It often is. My point is that morality is just a word, and that it unfortunately doesn’t have a well agreed upon meaning. And so someone could always just say “but I define it this way”.
And so to ask what morality is is really just asking how you define it. On the other hand, asking what someone’s altruism or preference ratios are is a concrete question.
You seem to be making the point that in practice, people’s definitions of morality usually can be traced back to happiness or goodness, even if they don’t know or admit it. I sense that you’re right.
Do you?
I doubt that there are many people who think that God has an evil will. But I could imagine that there are people who think that “even if I knew that God’s will was evil, following it would still be the right thing to do.”
I doubt that there are many people who think that God has an evil will. But I could imagine that there are people who think that “even if I knew that God’s will was evil, following it would still be the right thing to do.”
Sure. But any definition of “right” that gives that result is more or less baked into the definition of “God’s will” (e.g. “God’s will is, by definition, right!”), and it’s not the sort of “right” I care about.
And so to ask what morality is is really just asking how you define it.
Yay, I got your point. Morality is definitely a more ambiguous term. You’ve helped me realize I shouldn’t use it synonymously with goodness.
You seem to be making the point that in practice, people’s definitions of morality usually can be traced back to happiness or goodness, even if they don’t know or admit it.
Yes, my point exactly.
But I could imagine that there are people who think that “even if I knew that God’s will was evil, following it would still be the right thing to do.”
I am trying really hard to imagine these people, and I can’t do it. Even if God’s will includes “justice” and killing anyone who doesn’t believe, even if it’s a baby whose only defect is “original sin,” people will still say that this “just” will of God’s is moral and right.
The way I’m (operationally) defining Preferences and words like happy/utility, Preferences are by definition what provides us with the most happiness/utility. Consider this thought experiment:
You start off as a blank slate and your memory is wiped. You then experience some emotion, and you experience this emotion to a certain magnitude. Let’s call this “emotion-magnitude A”.
You then experience a second emotion-magnitude—emotion-magnitude B. Now that you have experienced two emotion-magnitudes, you could compare them and say which one was more preferable.
You then experience a third emotion-magnitude, and insert it into the list [A, B] according to how preferable it was. And you do this for a fourth emotion-magnitude. And a fifth. Until eventually you do it for every possible emotion-magnitude (aka conscious state aka mind-state). You then end up with a list of every possible emotion-magnitude ranked according to desirability. [1...n]. These are your Preferences.
So the way I’m defining Preferences, it refers to how desirable a certain mind-state is relative to other possible mind-states.
Now think about consequentialism and how stuff leads to certain consequences. Part of the consequences is the mind-state it produces for you.
Say that:
Action 1 → mind-state A
Action 2 → mind-state B
Now remember mind-states could be ranked according to how preferable they are, like in the thought experiment. Suppose that mind-state A is preferable to mind-state B.
From this, it seems to me that the following conclusion is unavoidable:
Action 1 is preferable to action 2.
In other words, Action 1 leads you to a state of mind that you prefer over the state of mind that Action 2 leads you to. I don’t see any ways around saying that.
To make it more concrete, let’s say that Action 1 is “going on vacation” and Action 2 is “giving to charity”.
IF going on vacation produces mind-state A.
IF giving to charity produces mind-state B.
IF mind-state A is preferable to mind-state B.
THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.
I call this “preferable”, but in this case words and semantics might just be distracting. As long as you agree that “going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to” when the first three bullet points are true, I don’t think we disagree about anything real, and that we might just be using different words for stuff.
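To make the structure of the argument extra explicit, here’s a minimal sketch of it in code. The rankings and actions are made-up stand-ins for the thought experiment above, not anything real:

```python
# Toy model: Preferences = a ranking over possible mind-states.
# One action is "preferable" to another iff it produces a higher-ranked mind-state.
# All values below are made-up stand-ins for the thought experiment.

# Higher number = more preferable mind-state.
mind_state_rank = {
    "mind-state A": 2,
    "mind-state B": 1,
}

# The mind-state each action leads to (part of its consequences).
action_to_mind_state = {
    "going on vacation": "mind-state A",
    "giving to charity": "mind-state B",
}

def preferable(action_1, action_2):
    """Return whichever action produces the higher-ranked mind-state (ties go to action_1)."""
    rank_1 = mind_state_rank[action_to_mind_state[action_1]]
    rank_2 = mind_state_rank[action_to_mind_state[action_2]]
    return action_1 if rank_1 >= rank_2 else action_2

print(preferable("going on vacation", "giving to charity"))
# -> "going on vacation", because mind-state A outranks mind-state B
```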
Thoughts?
Don’t you wonder why a rational human being would choose terminal goals that aren’t?
I do, but mainly from a standpoint of being interested in human psychology. I also wonder from a standpoint of hoping that terminal goals aren’t arbitrary and that people have an actual reason for choosing what they choose, but I’ve never found their reasoning to be convincing, and I’ve never found their informational social influence to be strong enough evidence for me to think that terminal goals aren’t arbitrary.
So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an “altruism mutation” and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It’s a pleasant thought, anyway.
:))) [big smile] (Because I hope what I’m about to tell you might address a lot of your concerns and make you really happy.)
I’m pleased to tell you that we all have “that altruism mutation”. Because of the way evolution works, we evolve to maximize the spread of our genes.
So imagine that there are two moms. They each have 5 kids, and they each enter an unfortunate situation where they have to choose between themselves and their kids.
Mom 1 is selfish and chooses to save herself. Her kids then die. She goes on to not have any more kids. Therefore, her genes don’t get spread at all.
Mom 2 is unselfish and chooses to save her kids. She dies, but her genes live on through her kids.
The outcome of this situation is that there are 0 organisms with selfish genes, and 5 with unselfish genes.
And so humans (and all other animals, from what I know) have evolved a very strong instinct to protect their kin. But as we know, preference ratios diminish rapidly from there. We might care about our friends and extended family, and a little less about our extended social group, and not so much about the rest of people (which is why we go out to eat instead of paying for meals for 100s of starving kids).
As far as evolution goes, this also makes sense. A mom that acts altruistically towards her social circle would gain respect, and the tribe’s respect may lead to them protecting that mom’s children, thus increasing the chances that they survive and produce offspring themselves. Of course, that altruistic act by the mom may decrease her chances of surviving to produce more offspring and to take care of her current offspring, but it’s a trade-off.* On the other hand, acting altruistically towards a random tribe across the world is unlikely to improve her children’s chances of surviving and producing offspring, so the moms that did this have historically been less successful at spreading genes than the moms that didn’t.
*Note: using mathematical models to simulate and test these trade-offs is the hard part of studying evolution. The basic ideas are actually quite simple.
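To give you a feel for what those models look like, here’s an extremely simplified back-of-the-envelope version in Python (every number is invented; real models are far more careful):

```python
# Toy version of the trade-off: an altruistic mom pays a cost to her own
# survival but boosts each kid's chance of surviving to reproduce.

def expected_gene_copies(n_kids, kid_survival, mom_survival, altruistic):
    if altruistic:
        mom_survival -= 0.20   # cost: she's more likely to die
        kid_survival += 0.15   # benefit: each kid is more likely to survive
    # Crude proxy for how well her genes spread: expected surviving kids,
    # plus a small bonus for her own survival (she might have more kids later).
    return n_kids * kid_survival + mom_survival

selfish = expected_gene_copies(5, kid_survival=0.50, mom_survival=0.80, altruistic=False)
altruist = expected_gene_copies(5, kid_survival=0.50, mom_survival=0.80, altruistic=True)
print(selfish, altruist)  # roughly 3.3 vs 3.85: with these made-up numbers, the altruism gene spreads
```

Change the assumed costs and benefits and the answer flips, which is exactly why the modeling is the hard part.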
But honestly, I literally didn’t even know what evolution was until several weeks ago though
I’m really sorry to hear that. I hope my being sorry isn’t offensive in any way.
so I don’t really belong bringing up any science at all yet;
Not so! Science is all about using what you do know to make hypotheses about the world and to look for observable evidence to test them. And that seems to be exactly what you were doing :)
Your hypotheses and thought experiments are really impressive. I’m beginning to suspect that you do indeed have training and are denying this in order to make a status play. [joking]
Like one human was born with an “altruism mutation” and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios?
I’d just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).
You seem to be saying that the mutation would spread because the organism remains alive. Think about it—if an organism has a mutation that increases the chances that it remains alive but that doesn’t increase the chances of having viable offspring, then that mutation would only remain in the gene pool until the organism died. And so of all the bajillions of our ancestors, only the ones still alive are candidates for the type of evolution you describe (mutations that only increase your chance of survival).
Note: I’ve since realized that you may know this already, but figured I’d keep it anyway.
Okay, yeah, I should have added the word some. Kaczynski is the only psychopath I’ve really read much about, so maybe I really did extrapolate his seeming rationality onto other psychopaths, even though we probably never hear about 99% of them. That would have to be some kind of bias; out of curiosity how would you label it? Maybe survivorship bias? Or availability heuristic? Anchoring? Or maybe even all of the above?
Believe me, I know. Even without trying to save money, I actually end up spending less on myself (excluding having paid for college) than on charity. Free hobbies are great. I didn’t mean a pension was a reason to become a detective; it would just be a nice perk. Thanks for the link, though. Lots of good articles on that site!
Well, I’m biased in favor of this idea, since I have an awful memory, but a pretty good ability (sometimes too good for my own good) to break things down like a reductionist and dissolve topics. I’ll check out your post tomorrow and try to give some feedback.
I think so too!
Nope, there’s really not, but another thing I’ve realized from reading SSC is that a major component of great writing (and teaching) is the sharing of relevant, interesting, relatable examples to help an idea. If you skillfully parse through an idea, the audience will probably understand it at the time. But if you want the idea to actually sink in and stick with them, great examples are key. This is one reason I like Scott’s posts so much; they actually affect my life. Personally, I was borderline cocky when I was younger (but followed social norms and concealed it). Then, I got older and started to read more and more, moved to the Bay Area, and met loads of smart people. Because of this, my self-esteem began to plummet, but I read that article just in time to stabilize it at a healthy, realistic level.
Anyway, Scott allows people to go easy on themselves for contributing less to the world than they might like, relative to their innate ability. Can we also go easy on ourselves relative to innate conscientiousness?
Yeah, this is sooo real. On a logical level, it’s easy to recognize my scope insensitivity. On a “feeling” level, I still don’t feel like I have to go out and do something about it. But I don’t want to admit my preference ratios are that far out of whack; I don’t want to be that selfish. Ugh. Now I feel like I should do something ambitious again, I’m so waffley about this. Thanks for all the help thinking through everything. This is BY FAR the best guidance anyone has ever given me in my life.
No… sorry, I was just working through my first thoughts about the idea, not making a meaningful point. Continuing on the selfishness idea, all I meant was that the researchers themselves would surely die eventually without AI, so even if AI made the world end a few years earlier for them, they personally have nothing to lose relative to what they could gain (dying a few years earlier vs. living forever). My first thought was “that’s selfish, in a bad way, since they care less than the bajillions of still unborn people would about whether humans go extinct” but then I extrapolated the idea that the researcher would die without AI to the idea that humanity would eventually go extinct without AI and decided it was selfish in a good way.
Anyway, another question for you. You know how you said we care only about our own happiness? Have you read the part of the sequences/rationality book where Eliezer brings up someone being willing to die for someone else? If so, what did you make of it? If not, I’ll go back and find exactly where it was.
I don’t know too much about him other than the basics (“he argued that his bombings were extreme but necessary to attract attention to the erosion of human freedom necessitated by modern technologies requiring large-scale organization”).
I think that his concerns are valid, but I don’t see how the bombings help him achieve the goal of bumping humanity off that path. Perhaps he knew he’d get caught and his manifesto would get attention, but a) there’s still a better way to achieve his goals, and b) he should have realized that people have a strong bias against serial killers.
The reason I think his concerns are valid is because capitalism tries to optimize for wanting, which is sometimes quite different from liking. And anecdotally, this seems to be a big problem.
I’m not sure what the bias is called :/. I know it exists and there’s a formal name for it though. I know because I remember someone calling me out on it in LWSH :)
Yes, I very much agree. At times I think the articles on LW fail to do this. Humans need to have their System 1s massaged in order to understand things intuitively.
Idk. This seems to be a question involving terminal goals. Ie. if you’re asking whether our innate conscientiousness makes us “good” or “bad”.
When I think of morality this is the/one question I think of: “What are the rules we’d ask people to follow in order to promote the happiest society possible?”. I’m sure you could nitpick at that, but it should be sufficient for this conversation. Example: the law against killing is good because if we didn’t have it, society would be worse off. Similarly, there are norms of certain preference ratios that lead to society being better off.
I don’t think we’d be better off if the norm was to have, say, equal preference ratios for everyone in the world. Doing so is very unnatural and would be very difficult, if not impossible. You have to weigh the costs of going against our impulses against the benefits that marginal conscientiousness would bring.
I’m not sure where the “equilibrium” points are. Honestly, I think I’d be lying to myself if I said that a preference ratio of 1,000,000,000:1 for you over another human would be overall beneficial to society. I suspect that subsequent generations will realize this and look at us in a similar way we look at Nazis (maybe not that bad, but still pretty bad). Morality seems to “evolve” from generation to generation.
Personally, my preference ratios are pretty bad. Not as bad as the average person because I’m less scope insensitive, but still bad. Ex. I eat out once in a while. You might say “oh well that’s reasonable”. But I could eat brown rice and frozen vegetables for very cheap and be like 70% as satisfied, and pay for x meals for people that are quite literally starving.
But I continue to eat out once in a while, and honestly, I don’t feel (that) bad about it. Because I accept that my preference ratios are where they are (pretty much), and I think it makes sense for me to pursue the goal of achieving my preferences. To be less precise and more blunt, “I accept that I’m selfish”.
And so to answer your question:
I think that the answer is yes. Main reason: because it’s unreasonable to expect that you change your ratios much.
It’s great that you understand it on a logical level. No one has made much progress on the feeling level. As long as you’re aware of the bias and make an effort to massage your “feeling level” towards being more accurate, you should be fine.
Why?
I think that exploring and answering that question will be helpful.
Try thinking about it in two ways:
1) A rational analysis of what you genuinely think makes sense. Note that rational does not mean completely logical.
2) An emotional analysis of what you feel, why you feel it, and in the event that your feelings aren’t accurate, how you can nudge them to be more accurate.
Wow! Thanks for letting me know. I’m really happy to help. I’ve been really impressed with your ability to pursue things, even when it’s uncomfortable. It’s a really important ability and most people don’t have it.
I think that not having that ability is often a bottleneck that prevents progress. Ex. an average person with that ability can probably make much more progress than a high IQ person without it (in some ways). It’s nice to have a conversation that actually progresses along nicely.
I think I have. I remember it being one of the few instances where it seemed to me that Eliezer was misguided. Although:
1) I remember going through it quickly and not giving it nearly as much thought as I would like. I’m content enough with my current understanding, and busy enough with other stuff that I chose to put it off until later. Although I do notice confusion—I very well may just be procrastinating.
2) I have tremendous respect for Eliezer. And so I definitely take note of his conclusions. The following thoughts are a bit dark and I hesitate to mention them… but:
a) Consider the possibility that he does actually agree with me, but he thinks that what he wrote will have a more positive impact on humanity (by influencing readers)
b) In the case that he really does believe what he writes, consider that it may not be best to convince him otherwise. Ie. he seems to be a very influential person in the field of FAI, and it’s very much in humanity’s interest for that person to be unselfish.
I haven’t thought this through enough to make these points public, so please take note of that. Also, if you wouldn’t mind summarizing/linking to where and why he disagrees with me, I’d very much appreciate it.
Edit: Relevant excerpt from HPMOR
Sorry, I feel like I’m linking to too many things which probably feels overwhelming. Don’t feel like you have to read anything. Just thought I’d give you the option.
Yeah, this was irrational. He should have remembered his terminal value of creating change instead of focusing on his instrumental value of getting as many people as possible to read his manifesto. -gives self a little pat on back for using new terminology-
Could you please elaborate on this idea a little? Anyway, thanks for the link (don’t apologize for linking so much, I love the links and read through and try to digest about 80% of them...). The liking/wanting difference is intuitive, but actually putting it into words is really helpful. I’m interested in exactly how you tie it in with Kaczynski, and I also think it’s relevant to my current dilemma.
Anyway, Scott’s example about smoking makes it seem as if people want to smoke but don’t like it. I think it’s the opposite; they like smoking, but don’t want to smoke. Do I really have these two words backwards? We need definitions. I think “liking” has more to do with your preferences, while “wanting” has to do with your goals. I recognize in myself, that if I like something, it’s very hard for me not to want it, and personally I find matrix-type philosophy questions to actually be difficult. That’s why I’ve never tried smoking; I was scared I might like it and start to want it. Without having tried it, it’s easy to say that it’s not what I want for myself. Is this only because I think it would bring me less happiness in the long run? I don’t think so. Even if you told me with certainty that smoking (or drugs) feels so incredibly good and is so incredibly fun that it could bring me happiness that outweighs the unhappiness caused by the bad stuff, I still wouldn’t want it! And I have no idea why. Which makes me wonder… what if I had never experienced how wonderful a fun-filled mostly-hedonic lifestyle is? Would I truly want it? Or am I just addicted?
Funny that you mention this example; I wouldn’t say it’s reasonable. Let me share a little story. When I was way younger, maybe 10 years ago, I went through a brief phase where I tried to convince my friends and family that eating at restaurants was wrong, saying “What if there were children in pain from starvation right outside the restaurant, and you knew the money you would spend in the restaurant could buy them rice and beans for two weeks… you would feel guilty about eating at the restaurant instead of helping, right? (“yes”) This is your conscience, right? (“yes”) Your conscience is from God, right? (“yes”) People in Africa are just as important as people in the US, right? (“yes”) Therefore, isn’t it wrong to eat at a restaurant instead of donating the money to help starving kids in Africa? (“no”) Why? (“it just isn’t!”)… at which point they would insist that if I truly believed this was wrong, I should act accordingly, and I just told them “No, I can’t, I’m too selfish… and besides, saving eternal souls is more important than feeding starving children.”

Then I looked at all the smart, unselfish adults I knew who still ate at restaurants, told myself I must be wrong somehow, and avoided thinking about the issue until we read Singer’s Famine, Affluence, and Morality in college (In my final semester, this was the class where it first occurred to me that there was nothing wrong with putting effort into school beyond what was necessary for perfect grades). I was really excited when we read it and was eagerly anticipating discussing it the next class to finally hear if someone could give a solid refutation of my old idea. My professor cancelled class that day, and we never went back to the topic. I cared, but unfortunately not quite enough to go talk to my professor outside of class. That was for nerds.

So I went on believing it was “wrong” to eat in restaurants, but to protect my sanity, didn’t think about it or do anything about it, even after de-converting from Christianity… until I came across Scott’s post Nobody Is Perfect, Everything is Commensurable which seems incredibly obvious in hindsight, yet was exactly what I needed to hear at the time.
I disagree. I think we would be better off if society could somehow advance to a stage where such unselfishness was the norm. Whether this is possible is another question entirely, but I keep trying to rid myself of the habit of thinking natural = better (personally, I see this habit as another effect of Christianity; I’m continually amazed to find just how much of my worldview it shaped).
I want to answer this question with “because emotion!” Is this allowed? Or is it akin to “explaining” something by calling it an emergent phenomenon?
1) Rationally, I can’t trace this back any farther than calling it a feeling. Was I born with this feeling? Is it the result of society? I don’t know. I don’t honestly think unselfish preference ratios would lead to a personal increase in my overall happiness, that’s for sure. Take effective altruism, for example. When I donate money, I don’t feel warm and fuzzy. I get a very small amount of personal satisfaction, societal respect, and a tiny reduction in the (already very small) guilt I feel for having such a good life. But honestly I rarely think about it, and I’m 99.99% sure the overall impact on my happiness is much smaller than if I were to use the money to fly to Guatemala and take a few weeks’ vacation to visit old friends. Yet, even as I acknowledge this, I still want to donate. I don’t know why. So I think that based solely on my intuition here, I might disagree with you and find personal happiness and altruism to be two separate terminal goals, often harmonious but sometimes conflicting.
2) Analyze emotion?? Can you do that?! As an ISTP, just identifying emotion is difficult enough.
As for your points about Eliezer...
a) Yeah, I have considered this too. But I think most of his audience is rational enough that if he said something that wasn’t rational, his credibility could take a hit. Whether this would stop him and how much of a consequentialist he really is, I have no idea.
b) Yeah, this is an interesting microcosm of the issue of whether we want to believe what is true vs. what is best for society. That said, I’m not saying Eliezer is wrong. My intuition does take his side now, but I usually don’t trust my intuitions very much.
Anyway, I went back through the book and found the title of the post. It’s Terminal Values and Instrumental Values. You can jump to “Consider the philosopher.”
Good quote! Right now, I interpret this as showing how personal happiness and “altruism/not becoming a Dark Lord” are both inexplicable, perhaps sometimes competing terminal values… how do you interpret it?
Sure!
In brief: Kaczynski seems to have realized that economies are driven by wanting, not liking, and that this will lead to unhappiness. I think that that conclusion is too strong though—I’d just say that it’ll lead to inefficiency.
Longer explanation: ok, so the economy is pretty much driven by what people choose to buy, and where people choose to work. People aren’t always so good at making these choices. One reason is because they don’t actually know what will make them happy.
Example: job satisfaction is important. There are lots of subtle things that influence job satisfaction. For example, there’s something about things like farming that produces satisfaction and contentment. People don’t value these things enough → these jobs disappear → people miss out on the opportunity to be satisfied and content.
Another reason why people aren’t good at making choices is because they don’t always have the willpower to do what they know they should.
Example: if people were smart, McDonalds wouldn’t be the huge empire that it is. People choose to eat at McDonalds because they don’t weigh the consequences it has on their future selves enough. The reason why McDonalds is huge is because tons of people make these mistakes. If people were smart, MealSquares and McDonalds would be flip-flopped.
Kaczynski seems to focus more on the first example, but I think they’re both important. Economies are driven by the decisions we make. Given the predictable mistakes people make, society will suffer in predictable ways. Kaczynski seems to have realized this.
I avoided using the terms “wanting” and “liking” on purpose. I’ll just say quickly that words are just symbols that refer to things, and as long as the two people are using the same symbol-thing mappings, it doesn’t matter. What’s important is that you seem to understand the distinction between the two things as far as wanting/liking goes. I do see what you mean about the term “wanting”, and now that I think about it I agree with you.
(I’ve avoided elaboration and qualifiers in favor of conciseness and clarity. Let me know if you want me to say more.)
Edit: I’m about 95% sure that there’s actual neuroscience research behind the wanting vs. liking thing. Ie. they’ve found a distinct brain area that corresponds to wanting, and a different distinct brain area that corresponds to liking.
Note: I studied neuroscience in college. I did research in a lab where we studied vision in monkeys, and part of this involved stimulating the monkey’s brain. There was a point where we were able to get the monkey to basically make any eye movement we wanted (based on where and how much we stimulated). It didn’t provide me with any new information as far as free will goes, but literally seeing it in person with my own eyes influenced me on an emotional level.
Interesting, I’ve never smoked, drank or done any drugs at all for similar reasons. Well, that’s part of the story.
I’m going to guess that the reason why you wouldn’t want to do drugs even if you knew they’d make you happy is because a) it’d sort of numb you away from thinking critically and making decisions, and b) you wouldn’t get to do good for the world. Your current lifestyle doesn’t seem to be preventing you from doing either of those.
:) I’ve proposed the same thought experiment except with buying diamonds. Eg. “Imagine that you go to the diamond store to buy a diamond, and there were x thousand starving kids in the parking lot who you could save if you spent the money on them instead. Would you still buy the diamond?”
And in the case of diamonds, it’s not only a) the opportunity cost of doing good with the money—it’s that b) you’re supporting an inhumane organization and c) you’re falling victim to a ridiculous marketing scheme that gets you to pay tens of thousands of dollars for a shiny rock. The post Diamonds are Bullshit on Priceonomics is great.
Furthermore, people do a, b and c in the name of love. To me, that seems about as anti-love as it gets. Sorry, this is a pet peeve of mine. It’s amazing how far you could push a human away from what’s sensible. If I had an online dating profile, I think it’d be, “If you still think you’d want a diamond after reading this, then I hate you. If not, let’s talk.”
I know I haven’t acknowledged the main counterargument, which is that the sacrifice is a demonstration of commitment, but there are ways of doing that without doing a, b and c.
That sort of thinking baffles me as well. I’ve tried to explain to my parents what a cost-benefit analysis is… and they just don’t get it. This post has been of moderate help to me because I understood what virtue ethics is after reading it (I never understood it before reading it).
People who say “it just isn’t” don’t think in terms of cost-benefit analyses. They just have ideas about what is and isn’t virtuous. As people like us have figured out, if you follow these virtues blindly, you’ll run into ridiculousness and/or inconsistency.
However, this isn’t to say that virtue-driven thinking doesn’t have its uses. Like all heuristics, it trades accuracy for speed, which is sometimes a worthy trade-off.
I’m glad to hear you disagree :) But I sense that I may not have explained what I think and why I think it. If you could just flip a switch and make everyone have equal preference ratios, I think that’d probably be a good thing.
What I’m trying to say is that there is no switch, and that making our preference ratios more equal would be very difficult. Ex. try to make yourself care about a random accountant in China as much as you do about, say, your aunt. As far as cost-benefit analysis goes, the effort and unease of doing this would be a cost. I sense that the costs aren’t always worth the benefits, and that given this, it’s socially optimal for us to accept our uneven preference ratios to some extent. Thoughts?
I interpret it as “Harry seems to think there are good reasons for choosing certain terminal values. Terminal values seem arbitrary to me.”
Nope, your longer explanation was perfect, and now I understand, thanks. I’m just a little curious why you would say those things lead to inefficiency instead of unhappiness, but you don’t have to elaborate any more here unless you feel like it.
Again, now I’m slightly curious about the rest of it...
Good guess. You’re right. But (I initially thought) smoking would hardly prevent those things, and I still don’t want to smoke. Then again, addiction could interfere with a), and the opportunity cost of buying cigarettes could interfere with b).
No way! A while back, I facebook-shared a very similar link about the ridiculousness of the diamond marketing scheme and proposed various alternatives to spending money on a diamond ring. I wasn’t even aware that the organization was inhumane.. yikes, information like that should be common knowledge. Also, probably at least some people don’t really want to get a diamond ring… but by the time the relationship gets serious, they can’t get themselves to bring it up (girls don’t want to be presumptuous, guys don’t want to risk a conflict?) so yeah, definitely a good kind of thing to get out of the way in a dating profile, haha.
Wow, that’s so interesting, I’d never heard of virtue ethics before. I have many thoughts/questions about this, but let’s save that conversation for another day so my brain doesn’t suffer an overuse injury. My inner virtue-ethicist wants to become a more thoughtful person, but I know myself well enough to know that if I dive into all this stuff head first, it will just end up being “a weird thinking phase I went through once,” and instrumentally, I want to be thoughtful because of my terminal value of caring about the world. (My gut reaction: Virtues are really just instrumental values that make life convenient for people whose terminal values are unclear/intimidating. (Like how the author of the link chose loyalty as a virtue. I bet we could find a situation in which she would abandon that loyalty.) But I also think that there’s a place for cost-benefit analysis even within virtue ethics, and that virtue ethicists with thoughtfully-chosen virtues can be more efficient consequentialists, which probably doesn’t make much sense, but I’d like to be both, please!)
Oh, yeah, that makes sense to me. Kind of like capitalism, it seems to work better in practice if we just acknowledge human nature. But gradually, as a society, we can shift the preference ratios a bit, and I think we maybe are. :) We can point to a decrease in imperialism, the budding effective altruism movement, or even veganism’s growing popularity as examples of this shifting preference ratio.
I didn’t mean anything deep by that. Inefficiency just means “less than optimal” (or at least that’s what I mean by it). For him to say that it will lead to actual unhappiness would mean that the costs are so great that they overcome any associated benefits and push whatever our default state is down until it reaches actual unhappiness. I suspect that the forces aren’t strong enough to push us too far off our happiness “set points”.
Just did a write up here. How convenient.
Yeah, it is. Check out the movie Blood Diamond and the song Conflict Diamonds. Not the most formal sources, but at least it’ll be entertaining :)
It seems that you don’t want to think about this now. If you end up thinking about it in the future, let me know—I’d love to hear your thoughts!
I like your point about being afraid/ashamed to do something and the two cases in general and with regard to drinking as a social lubricant.
I’ll post my drinking experience over there too, though I don’t have too much to say.
Haha, ok
How convenient. I thought about it a bit more after all. I actually still like my initial idea of virtues being instrumental values. I commented on the link you sent me, but a lot of my comment is similar to what I commented here yesterday…
As a consequentialist, that’s how I’m inclined to think of it too. But I think it’s important to remember that non-consequentialists actually think of virtues as having intrinsic value. Of being virtuous.
Absolutely! That’s how I’d start off. But the question I was getting at is “why does your brain produce those emotions”. What is the evolutionary psychology behind it? What events in your life have conditioned you to produce this emotion?
By default, I think it’s natural to give a lot of weight to your emotions and be driven by them. But once you really understand where they come from, I think it’s easier to give them a more appropriate weight, and consequently, to better achieve your goals. (1,2,3)
And you could manipulate your emotions too. Examples: You’ll be less motivated to go to the gym if you lay down on the couch. You’ll be more motivated to go to the gym if you tell your friends that you plan on going to the gym every day for a month.
So you don’t think terminal goals are arbitrary? Or are you just proclaiming what yours are?
Edit:
Are you sure that this has nothing to do with maximizing happiness? Perhaps the reason why you still want to donate is to preserve an image you have of yourself, which presumably is ultimately about maximizing your happiness.
(Below is a thought that ended up being a dead end. I was going to delete it, but then I figured you might still be interested in reading it.)
Also, an interesting thought occurred to me related to wanting vs. liking. Take a person who starts off with only the terminal goal of maximizing his happiness. Imagine that the person then develops an addiction, say to smoking. And imagine that the person doesn’t actually like smoking, but still wants to smoke. Ie. smoking does not maximize his happiness, but he still wants to do it. Should he then decide that smoking is a terminal goal of his?
I’m not trying to say that smoking is a bad terminal goal, because I think terminal goals are arbitrary. What I am trying to say is that… he seems to be actually trying to maximize his happiness, but just failing at it.
DEAD END. That’s not true. Maybe he is actually trying to maximize his happiness, maybe he isn’t. You can’t say whether he is or he isn’t. If he is, then it leads you to say “Well if your terminal goal is ultimately to maximize your happiness… then you should try to maximize your happiness (if you want to achieve your terminal goals).” But if he isn’t (just) trying to maximize happiness, he could add in whatever other terminal goals he wants. Deep down I still notice a bit of confusion regarding my conclusion that goals are arbitrary, and so I find myself trying to argue against it. But every time I do I end up reaching a dead end :/
Thank you! That does seem to be a/the key point in his article. Although “I value the choice” seems like a weird argument to me. I never thought of it as a potential counter argument. From what I can gather from Eliezer’s cryptic rebuttal, I agree with him.
I still don’t understand what Eliezer would say to someone that said, “Preferences are selfish and Goals are arbitrary”.
1- Which isn’t to imply that I’m good at this. Just that I sense that it’s true and I’ve had isolated instances of success with it.
2 - And again, this isn’t to imply that you should give emotions no weight and act like a robot. I used to be uncomfortable with just an “intuitive sense” and not really understanding the reasoning behind it. Reading How We Decide changed that for me. 1) It really hit me that there is “reasoning” behind the intuitions and emotions you feel. Ie. your brain does some unconscious processing. 2) It hit me that I need to treat these feelings as Bayesian evidence and consider how likely it is that I have that intuition when the intuition is wrong vs. how likely it is that I have the intuition when the intuition is right. (There’s a rough numerical sketch of what I mean after these notes.)
3 - This all feels very “trying-to-be-wise-sounding”, which I hate. But I don’t know how else to say it.
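Here is roughly what I mean in note 2 about treating an intuition as Bayesian evidence, with completely made-up probabilities:

```python
# Treat "I feel a strong intuition that X" as an observation, and update on it
# using how often that intuition shows up when X is true vs. when X is false.
# All of these numbers are invented, just to show the shape of the calculation.

prior_x_true = 0.5            # how likely X seemed before noticing the intuition
p_intuition_if_true = 0.8     # how often I'd feel this intuition when X is true
p_intuition_if_false = 0.3    # how often I'd feel it anyway when X is false

posterior = (p_intuition_if_true * prior_x_true) / (
    p_intuition_if_true * prior_x_true + p_intuition_if_false * (1 - prior_x_true)
)
print(round(posterior, 2))  # ~0.73: the intuition counts as real evidence, but not proof
```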
Oops, just when I thought I had the terminology down. :( Yeah, I still think terminal values are arbitrary, in the sense that we choose what we want to live for.
So you think our preference is, by default, the happiness mind-state, and our terminal values may or may not be the most efficient personal happiness-increasers. Don’t you wonder why a rational human being would choose terminal goals that aren’t? But we sometimes do. Remember your honesty in saying:
I have an idea. So based on biology and evolution, it seems like a fair assumption that humans naturally put ourselves first, all the time. But is it at all possible for humans to have evolved some small, pure, genuine concern for others (call it altruism/morality/love) that coexists with our innate selfishness? Like one human was born with an “altruism mutation” and other humans realized he was nice to have around, so he survived, and the gene is still working its way through society, shifting our preference ratios? It’s a pleasant thought, anyway.
But honestly, I literally didn’t even know what evolution was until several weeks ago though, so I don’t really belong bringing up any science at all yet; let me switch back to personal experience and thought experiments.
For example, let’s say my preferences are 98% affected by selfishness and maybe 2% by altruism, since I’m very stingy with my time but less so with my money. (Someone who would die for someone else would have different numbers.) Anyway, on the surface I might look more altruistic because there is a LOT of overlap between decisions that are good for others and decisions that make me feel good. Or, you could see the giant overlap and assume I’m 100% selfish. When I donate to effective charities, I do receive benefits like liking myself a bit more, real or perceived respect from the world, a small burst of fuzzy feelings, and a decrease in the (admittedly small) amount of personal guilt I feel about the world’s unfairness. But if I had to put a monetary value on the happiness return from a $1000 donation, it would be less than $1000. When I use a preference ratio and prefer other people’s happiness, their happiness does make me happy, but there isn’t a direct correlation between how happy it makes me and the extent to which I prefer it. So maybe preference ratios can be based mostly on happiness, but are sometimes tainted with a hint of genuine altruism?
Also, what about diminishing marginal returns with donating? Will someone even feel a noticeable increase in good feelings/happiness/satisfaction giving 18% rather than 17%? Or could someone who earns 100k purchase equal happiness with just 17k and be free to spend the extra 1k on extra happiness in the form of ski trips or berries or something (unless he was the type to never eat in restaurants)? Edit: never mind this paragraph, even if it’s realistic, it’s just scope insensitivity, right?
But similarly, let’s say someone gives 12% of her income. Her personal happiness would probably be higher giving 10% to AMF and distributing 2% in person via random acts of kindness than it would giving all 12% to AMF. Maybe you’re thinking that this difference would affect her mind-state, that she wouldn’t be able to think of herself as such a rational person if she did that. But who really values their self-image of being a rational opportunity-cost analyzer that highly? I sure don’t (well, 99.99% sure anyway).
Sooo could real altruism exist in some people and affect their preference ratios just like personal happiness does, but to a much smaller extent? Look at (1) your quote about your ambition (2) my desire to donate despite my firm belief that the happiness opportunity cost outweighs the happiness benefits (3) people who are willing to die for others and terminate their own happiness (4) people who choose to donate via effective altruism rather than random acts of kindness
Anyway, if there was an altruism mutation somewhere along the way, and altruism could shape our preferences like happiness, it would be a bit easier to understand the seeming discrepancy between preferences and terminal goals, between likes and wants. Here I will throw out a fancy new rationalist term I learned, and you can tell me if I misunderstand it or am wrong to think it might apply here… Occam’s razor?
Anyway, in case this idea is all silly and confused, and altruism is a socially conditioned emotion, I’ll attempt to find its origin. Not from giving to church (it was only fair that the pastors/teachers/missionaries get their salaries and the members help pay for building costs, electricity, etc). I guess there was the whole “we love because He first loved us” idea, which I knew well and regurgitated often, but don’t think I ever truly internalized. I consciously knew I’d still care about others just as much without my faith. Growing up, I knew no one who donated to secular charity, or at least no one who talked about it. The only thing I knew that came close to resembling large-scale altruism was when people chose to be pastors and teachers instead of pursuing high-income careers, but if they did it simply to “follow God’s will” I’m not sure it still counts as genuinely caring about others more than yourself. On a small-scale, my mom was really altruistic, like willing to give us her entire portion of an especially tasty food, offer us her jacket when she was cold too, etc… and I know she wasn’t calculating cost-benefit ratios, haha. So I guess she could have instilled it in me? Or maybe I read some novels with altruistic values? Idk, any other ideas?
I’m no Eliezer, but here’s what I would say: Preferences are mostly selfish but can be affected by altruism, and goals are somehow based on these preferences. Whether or not you call them arbitrary probably depends on how you feel about free will. We make decisions. Do our internal mental states drive these decisions? Put in the same position 100 times, with the same internal mental state, would someone make the same decision every time, or would it be 50-50? We don’t know, but either way, we still feel like we make decisions (well, except when it comes to belief, in my experience anyway) so it doesn’t really matter too much.
The way I’m (operationally) defining Preferences and words like happy/utility, Preferences are by definition what provides us with the most happiness/utility. Consider this thought experiment:
So the way I’m defining Preferences, it refers to how desirable a certain mind-state is relative to other possible mind-states.
Now think about consequentialism and how stuff leads to certain consequences. Part of the consequences is the mind-state it produces for you.
Say that:
Action 1 → mind-state A
Action 2 → mind-state B
Now remember mind-states could be ranked according to how preferable they are, like in the thought experiment. Suppose that mind-state A is preferable to mind-state B.
From this, it seems to me that the following conclusion is unavoidable:
In other words, Action 1 leads you to a state of mind that you prefer over the state of mind that Action 2 leads you to. I don’t see any way around saying that.
To make it more concrete, let’s say that Action 1 is “going on vacation” and Action 2 is “giving to charity”.
IF going on vacation produces mind-state A.
IF giving to charity produces mind-state B.
IF mind-state A is preferable to mind-state B.
THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.
I call this “preferable”, but in this case words and semantics might just be distracting. As long as you agree that “going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to” when the first three bullet points are true, I don’t think we disagree about anything real; we might just be using different words for stuff.
Thoughts?
I do, but mainly from a standpoint of being interested in human psychology. I also wonder from a standpoint of hoping that terminal goals aren’t arbitrary and that people have an actual reason for choosing what they choose, but I’ve never found their reasoning to be convincing, and I’ve never found their informational social influence to be strong enough evidence for me to think that terminal goals aren’t arbitrary.
:))) [big smile] (Because I hope what I’m about to tell you might address a lot of your concerns and make you really happy.)
I’m pleased to tell you that we all have “that altruism mutation”. Because of the way evolution works, we evolve to maximize the spread of our genes.
So imagine that there are two moms. They each have 5 kids, and they each enter an unfortunate situation where they have to choose between themselves and their kids.
Mom 1 is selfish and chooses to save herself. Her kids then die. She goes on to not have any more kids. Therefore, her genes don’t get spread at all.
Mom 2 is unselfish and chooses to save her kids. She dies, but her genes live on through her kids.
The outcome of this situation is that there are 0 organisms with selfish genes, and 5 with unselfish genes.
And so humans (and all other animals, from what I know) have evolved a very strong instinct to protect their kin. But as we know, preference ratios diminish rapidly from there. We might care about our friends and extended family, and a little less about our extended social group, and not so much about the rest of people (which is why we go out to eat instead of paying for meals for 100s of starving kids).
As far as evolution goes, this also makes sense. A mom that acts altruistically towards her social circle would gain respect, and the tribe’s respect may lead to them protecting that mom’s children, thus increasing the chances that they survive and produce offspring themselves. Of course, that altruistic act by the mom may decrease her chances of surviving to produce more offspring and to take care of her current offspring, but it’s a trade-off.* On the other hand, acting altruistically towards a random tribe across the world is unlikely to improve her children’s chances of surviving and producing offspring, so the moms that did this have historically been less successful at spreading genes than the moms that didn’t.
*Note: using mathematical models to simulate and test these trade-offs is the hard part of studying evolution. The basic ideas are actually quite simple.
I’m really sorry to hear that. I hope my being sorry isn’t offensive in any way. If it is, could you please tell me? I’d like to avoid offending people in the future.
Not so! Science is all about using what you do know to make hypotheses about the world and to look for observable evidence to test them. And that seems to be exactly what you were doing :)
Your hypotheses and thought experiments are really impressive. I’m beginning to suspect that you do indeed have training and are denying this in order to make a status play. [joking]
I’d just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).
You seem to be saying that the mutation would spread because the organism remains alive. Think about it—if an organism has a mutation that increases the chances that it remains alive but that doesn’t increase the chances of having viable offspring, then that mutation would only remain in the gene pool until the organism died. And so of all the bajillions of our ancestors, only the ones still alive are candidates for the type of evolution you describe (mutations that only increase your chance of survival). Note that evolution is just the process of how genes spread.
Note: I’ve since realized that you may know this already, but figured I’d keep it anyway.
I got a “comment too long error” haha
Okay, I guess I should have known some terminology correction was coming. If you want to define “happiness” as the preferred mind-state, no worries. I’ll just say the preferred mind-state of happiness is the harmony of our innate desire for pleasure and our innate desire for altruism, two desires that often overlap but occasionally compete. Do you agree that altruism deserves exactly the same sort of special recognition as an ultimate motivator that pleasure does? If so, your guess that we might not have disagreed about anything real was right.
Okay...most people want some vacation, but not full-time vacation, even though full-time vacation would bring us a LOT of pleasure. Doing good for the world is not as efficient at maximizing personal pleasure as going on vacation is. An individual must strike a balance between his desire for pleasure and his desire to be altruistic to achieve Harmonious Happiness (Look, I made up a term with capital letters! LW is rubbing off on me!)
Yay!!! I didn’t think of a mother sacrificing herself for her kids like that, but I did think the most selfish, pleasure-driven individuals would quite probably be the most likely to end up in prison, so their genes die out; and, less probably but still possibly, they could also be the least likely to find spouses and have kids.
I almost never get offended, much less about this. I appreciate the sympathy! But others could find it offensive in that they’d find it arrogant.

My thoughts on arrogance are a little unconventional. Most people think it’s arrogant to consider one person more gifted than others or one idea better than others. Some people really are more gifted and have far more positive qualities than others. Some ideas really are better. If you happen to be one of the more gifted people or understand one of the better ideas (evolution, in this case), and you recognize yourself as more gifted or recognize an idea as better, that’s not arrogance. Not yet. That’s just an honest perspective on value. Once you start to look down on people for being less gifted than you are or having worse ideas, that’s when you cross the line and become arrogant. If you are more gifted, or have more accurate ideas, you can happily thank the universe you weren’t born in someone else’s shoes, while doing your best to imagine what life would have been like if you were. You can try to help others use their own gifts to the best of their potential. You can try to share your ideas in a way that others will understand. Just don’t look down on people for not having certain abilities or believing the correct ideas, because you really can’t understand what it’s like to be them :)

But yeah, if you don’t want to offend people, it’s dangerous to express pity. Some people will look at your “feeling sorry” for those who don’t share your intelligence/life opportunities/correct ideas and call you arrogant for it, but I think they’re wrong to do so. There’s a difference between feeling sorry for people and looking down on them. For example, I am a little offended when one Christian friend and her dad, who was my high school Calculus teacher, look down on me. Most of my other friends just feel sorry for me, and I would be more offended if they didn’t, because feeling sorry at least shows they care.
I’m flattered!! But I must confess the one thought experiment that was actually super good, the one at the end about free will, wasn’t my idea. It was a paraphrase of this guy’s idea and I had used it in the past to explain my deconversion to my friends. The other ideas were truly original, though :) (Not to say no one else has ever had them! Sometimes I feel like my life is a series of being very pleasantly surprised to find that other people beat me to all my ideas, like how I felt when I first read Famine, Affluence and Morality ten years after trying to convince my family it was wrong to eat in restaurants)
Hey, this sounds like what I was just reading this week in the rationality book about Adaptation-Executers, not Fitness-Maximizers! I think I get this, and maybe I didn’t write very clearly (or enough) here, but maybe I still don’t fully understand. But if someone is nice to have around, wouldn’t he have fewer enemies and be less likely to die than the selfish guys? So he lives to have kids, and the same goes for them? Idk.
Note: I just read your note and now have accordingly decreased the probability that I had said something way off-base :)
I agree that in most cases (sociopaths are an exception) pleasure and doing good for others are both things that determine how happy something makes you. And so in that sense, it doesn’t seem that we disagree about anything real.
But you use romantic sounding wording. Ex. “special recognition as an ultimate motivator”.
So the way motivation works is that it’s “originally determined” by our genes, and “adjusted/added to” by our experiences. So I agree that altruism is one of our “original/natural motivators”. But I wouldn’t say that it’s an ultimate motivator, because to me that sounds like it implies that there’s something final and/or superseding about altruism as a motivator, and I don’t think that’s true.
I’m going to say my original thought, and then I’m going to say how I have since decided that it’s partially wrong of me.
My original thought is that “there’s no such thing as a special motivator”. We could be conditioned to want anything. Ie. to be motivated to do anything. The way I see it, the inputs are our genes and our experiences, and the output is the resulting motivation, and I don’t see how one output could be more special than another.
But that’s just me failing to use the word special as is customary for a good number of people. One use of the word special would mean that there’s something inherently different about it, and it’s that use that I argue against above. But another way people use it is just to mean that it’s beautiful or something. Ie. even though altruism is an output like any other motivation, humans find that to be beautiful, and I think it’s sensible to use the word special to describe that.
This all may sound a lot like nitpicking, and it sort of is, but not really. I actually think there’s a decent chance that clarifying what I mean by these words will bring us a lot closer to agreement.
True, but that wasn’t the point I was making. I was just using that as an example. Admittedly, one that isn’t always true.
I’m curious—was this earth shattering or just pretty cool? I got the impression that you thought that humans are completely selfish by nature.
And that this makes you sad and that you’d be happier if people did indeed have some sort of altruism “built in”.
I think you may be misunderstanding something about how evolution works. I see that you now understand that we evolve to be “altruistic to our genes”, but it’s a common and understandable error to instinctively think about society as we know it. In actuality, we’ve been evolving very slowly over millions of years. Prisons have only existed for, idk, a couple hundred years? (I realize you might understand this, but I’m commenting just in case you didn’t)
Not here they’re not :) And I think that description was quite eloquent.
I used to be bullied and would be sad/embarrassed if people made fun of me. But at some point I got into a fight, ended it, and had a complete 180 shift of how I think about this. Since then, I’ve sort of decided that it doesn’t make sense at all to be “offended” by anything anyone says about you. What does that even mean? That your feelings are hurt? The way I see it:
a) Someone points out something that is both fixable and wrong with you, in which case you should thank them and change it. And if your feelings get hurt along the way, that’s just a cost you have to incur along the path of seeking a more important end (self improvement).
b) Someone points out something about you that is not fixable, or not wrong with you. In that case they’re just stupid (or maybe just wrong).
In reality, I’m exaggerating a bit because I understand that it’s not reasonable to expect humans to react like this all the time.
Haha, I see. Well now I’m less impressed by your intellect but more impressed with your honesty!
Yeah, me too. But isn’t it really great at the same time, though? Like when I first read the Sequences, it just articulated so many things that I had thought but couldn’t express. And it also introduced so many new things that I swear I would have arrived at myself. (And also a bunch of new things that I don’t think I would have arrived at.)
Yeah, definitely!
Yes, that’s exactly what I meant to imply! Finally, I used the right words. Why don’t you think it’s true?
I did just mean “inherently different” so we’re clear here. I think what makes selfishness and goodness/altruism inherently different is that other psychological motivators, if you follow them back far enough, will lead people to act in a way that they either think will make them happy or that they think will make the world a happier place.
Well, the idea of being completely selfish by nature goes so completely against my intuition, I didn’t really suspect it (but I wouldn’t have ruled it out entirely). The “Yay!!” was about there being evidence/logic to support my intuition being true.
Prisons didn’t exist, but enemies did, and totally selfish people probably have more enemies… so yeah, I understand :)
No, you’re right! Whenever someone says something and adds “no offense,” I remark that there must be something wrong with me, because I never take offense at anything. I’ve used your exact explanation to talk about criticism. I would rather hear it than not, because there’s a chance someone recognizes a bad tendency/belief that I haven’t already recognized in myself. I always ask for negative feedback from people; there’s no downside to it (unless you already suffer from depression, or something).
In real life, the only time I feel offended/mildly annoyed is when someone flat-out claims I’m lying, like when my old teacher said he didn’t believe me that I spent years earnestly praying for a stronger faith. But even as I was mildly annoyed, I understood his perspective completely, because he either had to disbelieve me or disbelieve his entire understanding of the Bible and a God who answers prayer.
Yeah, ditto all the way! It’s entirely great :) I feel off the hook to go freely enjoy my life knowing it’s extremely probable that somewhere else, people like you, people who are smarter than I am, will have the ambition to think through all the good ideas and bring them to fruition.
I think we’ve arrived at a core point here.
See my other comment:
Back to you:
Oh, I see.
The way I’m defining preference ratios:
Preference ratio for person X = how much you care about yourself / how much you care about person X
Or, more formally, how many units of utility person X would have to get before you’d be willing to sacrifice one unit of your own utility for him/her.
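(A minimal way to write that out in symbols, just restating the definition above and assuming both people’s utilities are measured on one comparable scale: if r is your preference ratio for person X, then you’d accept a trade that costs you c units of utility and gives X a gain of g units exactly when

\[ g \ge r \cdot c. \]

So r = 1 means you weight X’s utility the same as your own, r > 1 means you weight yourself more heavily, and r = 0 means you’d help X at any cost to yourself.)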
So what does altruism mean? Does it mean “I don’t need to gain any happiness in order for me to want to help you, but I don’t know if I’d help you if it caused me unhappiness.”? Or does it mean “I want to help you regardless of how it impacts my happiness. I’d go to hell if it meant you got one extra dollar.”
[When I was studying for some vocab test in middle school my cards were in alphabetical order at one point and I remember repeating a thousand times—“altruism: selfless concern for others. altruism: selfless concern for others. altruism: selfless concern for others...”. That definition would imply the latter.]
Let’s take the former definition. In that case, you’d want person X to get one unit of utility even if you get nothing in return, so your preference ratio would be 0. But this doesn’t necessarily work in reverse. Ie. in order to save person X from losing one unit of utility, you probably wouldn’t sacrifice a bajillion units of your own utility. I very well might be confusing myself with the math here.
Note: I’ve been trying to think this through, but my approach is too simplistic and I keep countering it myself, and I’m having trouble articulating why. If you really want me to I could try, otherwise I don’t think it’s worth it. Sometimes I find math to be really obvious and useful, and sometimes I find it to be the exact opposite.
This depends on the person, but I think that everyone experiences it to some extent.
If the person is trying to maximize happiness, the question is just “how much happiness would a marginal 1k donation bring” vs. “how much happiness would a 1k vacation bring”. The answers to these questions depend on the person.
Sorry, I’m not sure what you’re getting at here. The person might be scope insensitive to how much impact the 1k could have if he donated it.
Yes, the optimal donation strategy for maximizing your own happiness is different from the one that maximizes impact :)
2, 3 and 4 are examples of people not trying to maximize their happiness.
1 is me sometimes knowingly following an impulse my brain produces even when I know it doesn’t maximize my happiness. Sadly, this happens all the time. For example, I ate Chinese food today, and I don’t think that doing so would maximize my long-term happiness.
In the case of my ambitions, my brain produces impulses/motivations stemming from things including:
Wanting to do good.
Wanting to prove to myself I could do it.
Wanting to prove to others I could do it.
Social status.
Brains don’t produce impulses in perfect, or even good, alignment with what they expect will maximize utility. I find the decision to eat fast food an intuitive example of this. But I don’t see how this changes anything about Preferences or Goals.
I’m sorry, I’m trying to understand what you’re saying but I think I’m failing. I think the problem is that I’m defining words differently than you. I’m trying to figure out how you’re defining them, but I’m not sure. Anyway, I think that if we clarify our definitions, we’d be able to make some good progress.
The way I’m thinking about it… think back to my operational definition of preferences in the first comment where I talk about how an action leads to a mind-state. What action leads to what mind-state depends on the person. An altruistic action for you might lead to a happy mind-state, and that same action might lead me to a neutral mind-state. So in that sense altruism definitely shapes our preferences.
I’m not sure if you’re implying this, but I don’t see how this changes the fact that you could choose to strive for any goal you want. That you could only say that a means is good at leading to an end. That you can’t say that an end is good.
Ie. I could choose the goal of killing people, and you can’t say that it’s a bad goal. You could only say that it’s bad at leading to a happy society. Or that it’s bad at making me happy.
That’s a term that I don’t think I have a proper understanding of. There was a point when I realized that it just means that A & B is always less likely than A alone, unless B is guaranteed whenever A happens (P(B | A) = 1). Like let’s say that the probability of A is .75. Even if P(B) is .999999, P(A & B) < P(A). And so in that sense, simpler = better.
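(Spelling out the arithmetic in that example, assuming A and B are independent the way the numbers suggest:

\[ P(A \wedge B) = P(A)\,P(B) = 0.75 \times 0.999999 = 0.74999925 < 0.75 = P(A). \]

More generally, \( P(A \wedge B) = P(A)\,P(B \mid A) \le P(A) \), with equality (for P(A) > 0) only when \( P(B \mid A) = 1 \).)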
But people use it in ways that I don’t really understand. Ie. sometimes I don’t get what they mean by simpler. I don’t see that the term applies here though.
I think it’d be helpful if you defined specifically what you mean by altruism. I mean, you don’t have to be all formal or anything, but more specific would be useful.
As far as socially conditioned emotions go, our emotions are socially conditioned to be happy in response to altruistic things and sad in response to anti-altruistic things. I wouldn’t say that that makes altruism itself a socially conditioned emotion.
Wow, that’s a great way to put it! You definitely have the head of a scientist :)
Yeah, I pretty much feel that way too.
Yeah, this has gotten a little too tangled up in definitions. Let’s try again, but from the same starting point.
Happiness = preferred mind-state (similar, potentially interchangeable terms: satisfaction, pleasure)
Goodness = what leads to a happier outcome for others (similar, potentially interchangeable terms: morality, altruism)
I guess my whole idea is that goodness is kind of special. Most people seem born with it, to one extent or another. I think happiness and goodness are the two ultimate motivators. I even think they’re the only two ultimate motivators. Or at least I can’t think of any other supposed motivation that couldn’t be traced back to one or both of these.
Pursuing a virtue like loyalty will usually lead to happiness and goodness. But is it really the ultimate motivator, or was there another reason behind this choice, i.e. it makes the virtue ethicist happy and she believes it benefits society? I’m guessing that in certain situations, she might even abandon the loyalty virtue if it conflicted with the underlying motivations of happiness and goodness. Thoughts?
Edit: I guess I’m realizing the way you defined preference doesn’t work for me either, and I should have said so in my other comment. I would say prefer simply means “tend to choose.” You can prefer something that doesn’t lead to the happiest mind-state, like a sacrificial death, or here’s an imaginary example:
You have to choose: Either you catch a minor cold, or a mother and child you will never meet will get into a car accident. The mother will have serious injuries, and her child will die. Your memory of having chosen will be erased immediately after you choose regardless of your choice, so neither guilt nor happiness will result. You’ll either suddenly catch a cold, or not.
Not only is choosing to catch the cold an inefficient happiness-maximizer (like donating to effective charities), this time it will actually have a negative effect on your happiness mind-state. Can you still prefer that you catch the cold? According to what seems to me like common real-world usage of “prefer,” you can. You are not acting in some arbitrary, irrational, inexplicable way in doing so. You can acknowledge you’re motivated by goodness here, rather than happiness.
In a way, I think this is true. Actually, I should give more credit to this idea—yeah, it’s true in an important way.
My quibble is that motivation is usually not rational. If it was, then I think you’d be right. But the way our brains produce motivation isn’t rational. Sometimes we are motivated to do something… “just because”. Ie. even if our brain knows that it won’t lead to happiness or goodness, it could still produce motivation.
And so in a very real sense, motivation itself is often something that can’t really be traced back. But I try really hard to respond to what people’s core points are, and what they probably meant. I’m not precisely sure what your core point is, but I sense that I agree with it. That’s the strongest statement I could make.
Unfortunately, I think my scientific background is actually harming me right now. We’re talking about a lot of things that have very precise scientific meanings, and in some cases I think you’re deviating from them a bit. Which really isn’t too big a deal because I should be able to infer what you mean and progress the conversation, but I think I’m doing a pretty mediocre job of that. When I reflect, I find it difficult to deviate from the definitions I’m familiar with, which is sort of bad “conversational manners”, because the only point of words in a conversation is to communicate ideas, and it’d probably be more efficient if I were better able to use other definitions.
Haha, you seem to be confused about virtue ethics in a good way :)
A true virtue ethicist would completely and fully believe that their virtue is inherently desirable, independent of anything and everything else. So a true virtue ethicist who values the virtue of loyalty wouldn’t care whether the loyalty led to happiness or goodness.
Now, I think that consequentialism is a more sensible position, and I think you do too. And in the real world, virtue ethicists often have virtues that include happiness and goodness. And if they run into a conflict between say the virtue of goodness and the one of loyalty, well I don’t know how they’d resolve it, but I think they’d give some weight to each, and so in practice I don’t think virtue ethicists end up acting too crazy, because they’re stabilized by their virtues of goodness and happiness. On the other hand, a virtue ethicist without the virtue of goodness… that could get scary.
I hadn’t thought about it before, but now that I do I think you’re right. I’m not using the word “prefer” to mean what it really means. In my thought experiment I started off using it properly in saying that one mind-state is preferable to another.
But the error I made is in defining ACTIONS to be preferable because the resulting MIND-STATES are preferable. THAT is completely inconsistent with the way it’s commonly used. In the way it’s commonly used, an action is preferable… if you prefer it.
I’m feeling embarrassed that I didn’t realize this immediately, but am glad to have realized it now because it allows me to make progress. Progress feels so good! So...
THANK YOU FOR POINTING THIS OUT!
Absolutely. But I think that I was wrong in an even more general sense than that.
So I think you understood what I was getting at with the thought experiment though—do you have any ideas about what words I should substitute in that would make more sense?
(I think that the fact that this is the slightest bit difficult is a huge failure of the English language. Language is meant to allow us to communicate. These are important concepts, and our language isn’t giving us a very good way to communicate them. I actually think this is a really big problem. The linguistic-relativity hypothesis basically says that our language restricts our ability to think about the world, and I think (and it’s pretty widely believed) that it’s true to some extent (the extent itself is what’s debated).)
Yay, agreement :)
Great point. I actually had a similar thought and added the qualifier “psychological” in my previous comment. Maybe “rational” would be better. Maybe there are still physical motivators (addiction, inertia, etc?) but this describes the mental motivators? Does this align any better with your scientific understanding of terminology? And don’t feel bad about it, I’m sure the benefits of studying science outweigh the cost of the occasional decrease in conversation efficiency :)
Then I think very, very few virtue ethicists actually exist, and virtue ethics taken that literally is so abnormal it could almost qualify as a psychological disorder. Take the common ethics dilemma of exposing hidden Jews: if someone’s virtue were “honesty,” they would have to give them up. (In the philosophy class I took, we resolved this dilemma by redefining “truth” and capitalizing it; e.g. Timmy’s father is a drunk. Someone asks Timmy if his father is a drunk. Timmy says no. Timmy told the Truth.) We whizzed through that boring old “correspondence theory” in ten seconds flat. I will accept any further sympathy you wish to express. Anyway, I think that any virtue besides happiness and goodness will have some loophole where 99% of people will abandon it if they run into a conflict between their chosen virtue and the deeper psychological motivations of happiness and goodness.
Edit: A person with extremely low concern for goodness is a sociopath. The amount of concern someone has for goodness as a virtue vs. amount of concern for personal happiness determines how altruistic she is, and I will tentatively call this a psychological motivation ratio, kind of like a preference ratio. And some canceling occurs in this ratio because of overlap.
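(A minimal sketch of how I’m picturing that, with made-up labels rather than anything standard:

\[ \text{motivation ratio} = \frac{\text{concern for goodness}}{\text{concern for personal happiness}} \]

so a higher ratio means more altruistic, and an extremely low one shades toward sociopathy. The “canceling” is just that acting on goodness usually also produces personal happiness, so the numerator and denominator overlap rather than being independent.)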
Yes! I wish I could have articulated it that clearly for you myself.
Instead of saying we “prefer” an optimal mind-state… you could say we “like” it the most, but that might conflict with your scientific definitions for likes and wants. But here’s an idea, feel free to critique it...
“Likes” are things that actually produce the happiest, optimal mind-states within us
“Wants” are things we prefer, things we tend to choose when influenced by psychological motivators (what we think will make us happy, what we think will make the world happy)
Some things, like smoking, we neither like (or maybe some people do, idk) nor want, but we still do because the physical motivators overpower the psychological motivators (i.e. we have low willpower)
Absolutely!! I’ll check out that link.
Hmmm, so the question I’m thinking about is, “what does it mean to say that a motivation is traced back to something?” It seems to me that the answer to that involves terminal and instrumental values. Like if a person is motivated to do something, but is only motivated to do it to the extent that it leads to the person’s terminal value, then it seems that you could say that this motivation can be traced back to that terminal value.
And so now I’m trying to evaluate the claim that “motivations can always be traced back to happiness and goodness”. This seems to be conditional on happiness and goodness being terminal goals for that person. But people could, and often do, choose whatever terminal goals they want. For example, people have terminal goals like “self improvement” and “truth” and “be a man” and “success”. And so, I think that a person with a terminal goal other than happiness and goodness will have motivations that can’t be traced back to happiness or goodness.
But I think that it’s often the case that motivations can be traced back to happiness and goodness. Hopefully that means something.
Wait… so the Timmy example was used to argue against correspondence theory? Ouch.
Perhaps. Truth might be an exception for some people. Ex. some people may choose to pursue the truth even if it’s guaranteed to lead to decreases in happiness and goodness. And success might also be an exception for some people. They also may choose to pursue success even if it’s guaranteed to lead to decreases in happiness and goodness. But this becomes a question of some sort of social science rather than of philosophy.
I like the concept! I propose that you call it an altruism ratio as opposed to a psychological motivation ratio because I think the former is less likely to confuse people.
Eh, I think that this would conflict with the way people use the word “like” in a similar way to the problems I ran into with “preference”. For example, it makes sense to say that you like mind-state A more than mind-state B. But I’m not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term “like”. Damn language! :)
I had just reached the same conclusion myself! So I think that yeah, happiness and goodness are the only terminal values, for the vast majority of the thinking population :)
Note: I really don’t like the term “happiness” to describe the optimal mind-state since I connect it too strongly with “pleasure” so maybe “satisfaction” would be better. I think of satisfaction as including both feelings of pleasure and feelings of fulfillment. What do you think?
I think that all these are really just instrumental goals that people subconsciously, and perhaps mistakenly, believe will lead them to their real terminal goals of greater personal satisfaction and/or an increase in the world’s satisfaction.
It was an example of whatever convoluted theory my professor invented as a replacement for correspondence theory.
Exactly. I think people like the ones you mention are quite rare.
Ok, thanks :)
What if language isn’t the problem? Maybe the connection between mind-states and actions isn’t so clear-cut after all. If you like mind-state A more than mind-state B, then action A is mind-state-optimizing, but I’m not sure you can go much farther than that… because goodness.
:)
I haven’t found a term that I really like. Utility is my favorite though.
Idk, I want to agree with you but I sense that it’s more like 95% of the population. I know just the 2 people to ask though. My two friends are huge proponents of things like “give it your all” and “be a man”.
Also, what about religious people? Aren’t there things they value independent of happiness and goodness? And if so wouldn’t their motivations reflect that?
Edit:
Friend 1 says it’s ultimately about avoiding feeling bad about himself, which I classify as him wanting to optimize his mind-state.
Friend 2 couldn’t answer my questions and said his decisions aren’t that calculated.
Not too useful after all. I was hoping that they’d be more insightful.
Oooooo I like that term!
It seems clear-cut to me. An action leads to one state of the world, and in that state of the world you have one mind-state. Can you elaborate?
Not sure what you mean by that either.
Yeah, ask those friends whether, in a situation where “giving it their all” and “being men” made them less happy and made the world a worse place, they would still stick with their philosophies. And if they genuinely can’t imagine a situation where they would feel less satisfied after “giving it their all,” then I would postulate that as they’re consciously pursuing these virtues, they’re subconsciously pursuing personal satisfaction. (Edit: Just read a little further and saw that you already have their responses. Yeah, not too insightful, maybe I’ll develop this idea a bit more and ask the rest of the LW community what they think.) (Edit #2: Thought about this a little more, and I have a question you might be able to answer. Is the subconscious considered psychological or physical?)
As for religious people...well, in the case of Christianity, people would probably just want to “become Christ-like” which, for them, overlaps really well with personal satisfaction and helping others. But in extreme cases, someone might truly aspire to “become obedient to X” in which case obedience could be the terminal value, even if the person doesn’t think obedience will make them happy or make the world a better place. But I think that such ultra-religiosity is rare, and that most people are still ultimately psychologically motivated to either do what they think will make them happy, or what they think will make the world a better place. I feel like this is related to Belief in Belief but I can’t quite articulate the connection. Maybe you’ll understand, if not, I’ll try harder to verbalize it.
No, if that’s all you’re saying, that “if you like mind-state A more than mind-state B, then action A is mind-state-optimizing,” then I completely agree! For some reason, I read your sentence (“But I’m not sure that it makes sense to say that you necessarily like action A more than action B, given the way people use the term ‘like’”) and thought you were trying to say that they necessarily like action A more... haha, oops
How about this answer: “If that makes me less happy and makes the world a worse place, the world would be decidedly weird in a lot of fundamental and ubiquitous ways. I am unable to comprehend what such a weird world would be like in enough detail to make meaningful statements about what I would do in it.”
Let’s just focus on “giving it your all.” What is “it”?? You surely can’t give everything your all. How do you choose which goals to pursue? “Giving it your all” is a bit abstract.
That’s exactly what I asked them.
The first one took a little prodding but eventually gave a somewhat passable answer. And he’s one of the smartest people I’ve ever met. The second one just refused to address the question. He said he wouldn’t approach it that way and that his decisions aren’t that calculated. I don’t know how you want to explain it, but for pretty much every person I’ve ever met or read, sooner or later they seem to just flinch away from the truth. You seem to be particularly good at not doing that—I don’t think you’ve demonstrated any flinching yet.
And see what I mean about how the ability to not flinch is often the limiting factor? In this case, the question wasn’t really difficult in an intellectual way at all. It just requires you to make a legitimate effort to accept the truth. The truth is often uncomfortable to people, and thus they flinch away, don’t accept it, and fail to make progress.
I could definitely answer that! This really gets at the core of the map vs. the territory (maybe my favorite topic :) ). “Physical” and “psychological” are just two maps we use to describe reality. In reality itself, the territory, there’s no such thing as physical/psychological. If you look at the properties of individual atoms, they don’t have any sort of property that says “I’m a physical atom” or “I’m a psychological atom”. They only have properties like mass and electric charge (as far as we know).
I’m not sure how much you know about science, but I find the physics-chemistry-biology spectrum to be a good demonstration of the different levels of maps. Physics tries to model reality as precisely as possible (well, some types of physics that is; others aim to make approximations). Chemistry approximates reality using the equations of physics. Biology approximates reality using the equations of chemistry. And you could even add psychology in there and say that it approximates reality using the ideas (not even equations) of biology.
As far as psychology goes, a little history might be helpful. It’s been a few years since I studied this, but here we go. In the early 1900s, behaviorism was the popular approach to psychology. Behaviorists just tried to look at what inputs lead to what outputs. Ie. they’d ask, “if we expose people to situation X, how do they respond?” The input is the situation, and the output is how they respond.
Now, obviously there’s something going on that translates the input to the output. They had the sense that the translation happens in the brain, but it was a black box to them and they had no clue how it works. Furthermore, they sort of saw it as so confusing that there’s no way they could know how it works. And so behaviorists were content to just study what inputs lead to what outputs, and to leave the black box as a mystery.
Then in the 1950s there was the cognitive revolution where they manned up and ventured into the black box. They thought that you could figure out what’s going on in there and how the inputs get translated to outputs.
Now we’re almost ready to go back to your question—I haven’t forgotten about it. So cognitive psychology is sort of about what’s going on in our head and how we process stuff. Regarding the subconscious, even though we’re not conscious of it, there’s still processing going on in that black box, and so the study of that processing still falls under the category of cognitive psychology. But again, cognitive psychology is a high-level map. We’re not there yet, but we’d be better able to understand that black box with a lower level map like neuroscience. And we’d be able to learn even more about the black box using an even lower level map like physics.
If you have any other questions or even just want to chat informally about this stuff please let me know. I love thinking about this stuff and I love trying to explain things (and I like to think I’m pretty good at it) and you’re really good at understanding things and asking good questions which often leads me to think about things differently and learn new things.
Interesting. I had the impression that religious people had lots of other terminal values. So things like “obeying God” aren’t terminal values? I had the impression that most religions teach that you should obey no matter what. That you should obey even if you think it’ll lead to decreases in goodness and happiness. Could you clarify?
Edit: I just realized something that might be important. You emphasize the point that there’s a lot of overlap between happiness/goodness and other potentially terminal values. I haven’t been emphasizing it. I think we both agree that there is the big overlap. And I think we agree that “actions can either be mind-state optimizing, or not mind-state optimizing” and “terminal values are arbitrary”.
I think you’re right to put the emphasis on this and to keep bringing it up as an important reminder. Being important, I should have given it the attention it deserves. Thanks for persisting!
It took me a while to understand belief in belief. I read the Sequences about 2 years ago and didn’t understand it until a few weeks ago as I was reading HPMOR. There was a point when one of the characters said he believed something but acted as if he didn’t. Like if he believed what he said he believed, he definitely would have done X, because X was clearly in his interest. I just reread Belief in Belief, and now I feel like it makes almost complete sense to me.
From what I understand, the idea with belief in belief is that:
a) There’s your model of how you think the world will look.
b) And then there’s what you say you believe.
To someone who values consistency, a) and b) should be the same thing. But humans are weird, and sometimes a) and b) are different.
In the scenario you describe, there’s a religious person who ultimately wants goodness and would choose goodness over his virtues if he had to pick, but he nevertheless claims that his virtues are terminal goals to him. And so as far as a) goes, you both agree that he would choose goodness over his virtues. But as far as b) goes, you claim to believe different things. What he claims to believe is inconsistent with his model of the world, and so I think you’re right—this would be an example of belief in belief.
Yup, that’s all I’m trying to say. No worries if you misunderstood :). I hadn’t realized that this was ultimately all I was trying to say before talking to you and now I have, so thank you!
Well, thanks! How does that saying go? What is true is already so? Although in the context of this conversation, I can’t say there’s anything inherently wrong with flinching; it could help fulfill someone’s terminal value of happiness. If someone doesn’t feel dissatisfied with himself and his lack of progress, what rational reason is there for him to pursue the truth? Obviously, I would prefer to live in a world where relentlessly pursuing the truth led everyone to their optimal mind-states, but in reality this probably isn’t the case. I think “truth” is just another instrumental goal (it’s definitely one of mine) that leads to both happiness and goodness.
Yeah! I think I first typed the question as “is it physical or psychological?” and then caught myself and rephrased, adding the word “considered” :) I just wanted to make sure I’m not using scientific terms with accepted definitions that I’m unaware of. Thanks for your answer!! You are really good at explaining stuff. I think the cognitive psychology stuff is related to what I just read about last week in the ebook too, about neural networks, the two different brain map models, and the bleggs and rubes.
I don’t know your religious background, but if you don’t have one, that’s really impressive, given that you haven’t actually experienced much belief-in-belief since Santa (if you ever did). But yeah, basically, this sentence summarizes it perfectly:
Any time a Christian does anything but pray for others, do faith-strengthening activities, spread the gospel, or earn money to donate to missionaries, he is anticipating as if God/hell doesn’t exist. I realized this, and sometimes tried to convince myself and others that we were acting wrongly by not being more devout. I couldn’t shake the notion that spending time having fun instead of praying or sharing the gospel was somehow wrong, because it went against God’s will that all men be saved, and I believed God’s will, by definition, was right. But I still acted in accordance with my personal happiness some of the time. I said God’s will was the only end-in-itself, but I didn’t act like it. So like you said, inconsistency. Thanks for helping me with the connection there.
http://wiki.lesswrong.com/wiki/Litany_of_Gendlin
I agree with you that there’s nothing inherently wrong with it, but I don’t think this is a case of someone making a conscious decision to pursue their terminal goals. I think it’s a case of “I’m just going to follow my impulse without thinking”.
Haha thanks. I can’t remember ever believing in belief, but studying this rationality stuff actually teaches you a lot about how other people think.
I was raised Jewish, but people around me were about as not religious as it gets. I think it’s called Reform Judaism. In practice it just means, “go to Hebrew school, have a Bar/Bat Mitzvah, celebrate like 3-4 holidays a year, and believe whatever you want without being a blatant atheist”.
I’m 22 years old and I genuinely can’t remember the last time I believed in any of it, though. I had my Bar Mitzvah when I was 13 and I remember not wanting to do it and thinking that it’s all BS. Actually, I think I remember being in Hebrew school one time when we were being taught about God. I believed in God at the time, and I was curious how they knew that God existed, so I asked, and they basically just said, “we just know,” and I remember being annoyed by that answer. And now I’m remembering being confused because I wanted to know what God really was, and some people told me he was human-like and had form, and some people just told me he was invisible.
I will say that I thoroughly enjoy Jewish humor though, and I thank the Jews very much for that :). Jews love making fun of their Jewish mannerisms, and it’s all in good fun. Even things that might seem mean are taken in good spirit.
Hey, um… I have a question. I’m not sure if you’re comfortable talking about it though. Please feel free to not answer.
It sounds really stressful believing that stuff. Like it seems that even people with the strongest faith spend some time deviating from those instructions and do things like have fun or pursue their personal interests. And then you’d feel guilty about that. Come to think of it, it sounds similar to my guilt for ever spending time not pursuing ambitions.
And what about believing in Hell? From what I understand, Christians believe that there’s a very non-negligible chance that you end up in Hell, suffering unimaginably for eternity. I’m not exaggerating at all when I say that if I believed that, I would be in a mental hospital crying hysterically and trying my absolute hardest to be a good person and avoid ending up in Hell. Death is one of my biggest fears, and I also fear the possibility of something similar to Hell, even though I think it’s a small possibility. Anyway, I never understood how people could legitimately believe in Hell and just go about their lives like everything is normal.
Few people make that many conscious decisions! But it could be a subconscious decision that still fulfills the goal. For my little sister, this kind of thing actually is a conscious decision. Last Christmas break, when I first realized that, unlike almost all of my close friends and family in Wisconsin, I didn’t like our governor all that much, she eventually cut me off, saying, “Dad and I aren’t like you, Ellen. We don’t like thinking about difficult issues.” Honesty, self-awareness, and consciously-selected ugh fields run in the family, I guess.
That’s funny. I just met someone like you, probably also a Reform Jew, who told me some jokes and about all these Jewish stereotypes that I had never even heard of, and they seem to fit pretty well.
It’s exactly like that, just multiplied times infinity (adjusted for scope insensitivity) because hell is eternal.
Yeah, hell is basically what led me away from Christianity. If you’re really curious, how convenient, I wrote about it here to explain myself to my Christian friends. You’ll probably find it interesting. You can see how recent this is for me and imagine what a perfect resource the rationality book has been. I just wish I had discovered it just a few weeks earlier, when I was in the middle of dozens of religious discussions with people, but I think I did an okay job explaining myself and talking about biases I had recognized in myself but didn’t even know were considered “biases” like not giving much weight to evidence that opposes your preferred belief (label: confirmation bias) and the tendency to believe what people around you believe (label: I forget, but at least I now know it has one) and many more.
But how did I survive, believing in hell? Well, there’s this wonderful book of the Bible called Ecclesiastes that seems to mostly contradict the rest of Christian teachings. Most people find it depressing. Personally, I loved it and read it every week to comfort myself. I still like it, actually. It’s short, you could read it in no time, but here’s a sample from chapter 3: 18-22:
Indeed.
True. In the case of my friend, I don’t think it was, but in cases where it is, then I think that it could be a perfectly sensible approach (depending on the situation).
This was the relevant part of the conversation:
It’s possible that he had legitimately decided earlier to not put that much calculation into these sorts of decisions, because he thinks that this strategy will best lead to his terminal goals of happiness or goodness or whatever. But this situation actually didn’t involve any calculation at all. The calculations were done for him already—he just had to choose between the results.
To me it seems more likely that he a) is not at all used to making cost-benefit analyses and makes his decisions by listening to his impressions of how virtuous things seem. And b) in situations of choosing between options that both produce unpleasant feelings of unvirtuousness, he flinches away from the reality of the (hypothetical) situation.
I should mention that I think that >99% of people are quite quite stupid. Most people don’t seem very agenty to me, given the way I define it. Most people seem to not put much thought behind the overwhelming majority of what they do and think and instead just respond to their immediate feelings and rationalize it afterwards. Most people don’t seem to have the open-mindedness to give consideration to ideas that go against their impulses (this isn’t to say that these impulses are useless), nor the strength to admit hard truths and choose an option in a lose-lose scenario.
Really, I don’t know how to word my thoughts very well on this topic. Eliezer addresses a lot of the mistakes people make in his articles. It’d take some time for me to really write up my thoughts on this. And I know that it makes me sound like a Bad Person for thinking that >99% of people are really stupid, but unfortunate truths have to be dealt with. The following isn’t a particularly good argument, but perhaps it’s an intuitive one: consider how we think people 200 years ago were stupid, and people 200 years ago thought people 400 years ago were stupid, etc. (I don’t think this means that everyone will always be stupid. Ie. I think that not being stupid means something in an absolute sense, not just a relative one.)
I’m truly truly sorry that you had experienced this. No one should ever have to feel that. If there’s anything I could do or say to help, please let me know.
I had actually seen the link when I looked back at your first post in the welcome thread at some point. I confess that I just skimmed it briefly and didn’t pick up on the core idea. However, I’ve just read it more carefully.
I love your literary device. The Banana Tree thought experiment and analogy, that is (I don’t actually know what a literary device is). And the fact that people believe both that a) God is caring, AND b) God created Hell and set up the circumstances where millions/billions of people will end up there, is… let’s just say inconsistent by any reasonable definition of the words consistent, caring and suffering.
In the same way that you talk about how God is bad for creating Hell, I actually think something similar about life itself. I’m a bit pessimistic. The happiness set point theory says that we have happiness set points and that we may temporarily deviate above or below them, but that we end up hovering back to our set points.
Furthermore, this set point seems to be quite neutral and quite consistent amongst humans. What I mean by neutral is that minute-to-minute, most people seem to be in a “chill” state of mind, not really happy or sad. And we don’t spend too much time deviating from that. And there’s also the reality that we’re all destined to die. Why does life have to be mediocre? Why can’t it be great? Why do we all have to get sick and die? I don’t know how or if reality was “created”, but to anthropomorphize, why did the creator make it like this? From the perspective of pre-origin-of-reality (if that’s even a thing), I feel the same feelings about neutralness that you expressed about the badness of Hell (but obviously Hell is far worse than neutralness). From a pre-origin perspective, reality could just as easily have been amazing and wonderful, so the fact that it’s neutral and fleeting seems… disappointing?
If it got you through believing in hell, I will most certainly read it.
So a possible distinction between virtue ethicists and consequentialists: virtue ethicists pursue their terminal values of happiness and goodness subconsciously, while consequentialists pursue the same terminal values consciously… as a general rule? And so the consequentialists seem more agenty because they put more thought into their decisions?
Yeah, that’s what I was trying to get across, and it’s why I titled the post “Do You Feel Selfish for Liking What You Believe”! I hesitated to include the analogy since it was the only part with the potential to offend people (two people accused me of mocking God) and taint their thoughts about the rest of the post, but in the end I left it, partly as a hopefully thought-provoking interlude between the more theological sections and mostly so I could give my page a more fun title than Deconversion Story Blog #59845374987.
The happiness set point theory makes sense! Actually, it makes a lot of sense, and I think it’s connected to the idea that most people do not act in agenty ways! If they did, I think they could increase their happiness. Personally, I don’t find that it applies to me much at all. My happiness has steadily risen throughout my life. I am happier now than ever before. I am now dubbing myself a super-agent. I think the key to happiness is to weed not only the bad stuff out of your life, but the neutral stuff as well. Let me share some examples:
I got a huge scholarship after high school to pursue a career in the medical field (I never expected to love my career, but that wasn’t the goal; I wanted to fund lots of missionaries). I was good at my science classes, and I didn’t dislike them, but I didn’t like them either. I realized this after my first year of college. I acknowledged the sunk cost fallacy, cut my losses, wrote a long, friendly letter to the benefactor to assuage my guilt, and decided to pursue another easy high-income career instead, law, which would allow me to major in anything I wanted. So I sat down for a few hours, considered like 6 different majors, evaluated the advantages and disadvantages, and came up with a tie between Economics and Spanish. I liked Econ for many reasons, but mainly because the subject matter itself was truly fascinating to me; I liked Spanish not so much for the language itself but because the professor was hilarious, fun, casual, and flexible about test/paper deadlines, I could save money by graduating in only 3 years, and I would get the chance to travel abroad. I flipped a coin between the two, and majored in Spanish. Result: a lasting increase in happiness.
My last summer after college, I was a cook at a Boy Scout camp. It was my third summer there. I worked about 80 hours a week, and the first two years I loved it because my co-workers were awesome. We would have giant water fights in the kitchen (dumping 5-gallon igloos on each other, standing on the roof and dropping balloons filled with water on each other, etc.), we would play cribbage in between meals, hang out together, etc. I also had two good friends among the counselors. Anyway, that third year, my friends had left and it was still a pretty good job in a pretty, foresty area, but it wasn’t super fun like it had been. So after the first half of the summer, once I had earned enough to pay the last of my college debt, I found someone to replace me at my job, wrote out pages of really detailed instructions for everything (to assuage my guilt), and quit, to go spend a month “on vacation” at home with my family before leaving for Guatemala. Result: a lasting increase in happiness.
I dropped down to work part-time in Guatemala to pursue competitive running more. I left as soon as I got a stress fracture. I chose a family to nanny for based on the family itself, knowing that would affect my day-to-day happiness more than the location (which also turned out to be great).
My belief in God was about to cause not only logical discontent in my mind, but also a suboptimal level of real life contentment that I could not simply turn into an “ugh field” as I almost set off to pursue a career I didn’t love to donate to missionaries. Whatever real-life security benefits it brought me were about to become negligible, so I finally spent a few very long and thoughtful days confronting my doubts and freed myself from that belief.
Everyday examples of inertia-breaking, happiness-inducing activities: I’m going for a run and run past a lilac bush. It smells really good, so I stop my watch and go stand by it for a while. I’m driving in the car, and there’s a pretty lookout spot, so I actually stop for a while. I do my favorite activities like board games, pickup sports, and nature stuff like hiking and camping every weekend, not just once in a while. I don’t watch TV because there’s always something I’d rather be doing. If I randomly wake up early, I consciously think about whether I would get more satisfaction out of lazing around in bed, or getting up to make a special breakfast for the kids I nanny for.
What’s my point? I have very noticeably different happiness levels based on the actions I take. If I’m just going with the flow, taking life as it comes, I have an average amount of happiness compared to those around me; I occasionally do let myself slip into neutral situations. If I put myself in a super fun and amazing situation, I have way more happiness than those around me (which is a good thing, since happiness is contagious). Sometimes I just look at my life and can’t help but laugh with delight at how wonderful it is. If I ever get a sense that my happiness is starting to neutralize/stabilize, I make a big change and get it back on the right track. For instance, I think that thanks to you, I have just realized that my happiness is not composed of pleasure alone, but also personal fulfillment. I always knew that “personal fulfillment” influenced other people, and either I’m just realizing/admitting to myself that it influences me too, or my preferences are changing a bit as I get older. So, I’m spending some time reading and thinking and writing, instead of only playing games and reading fiction and cooking and hiking. Result: I am even happier than I knew possible :)
Maybe I don’t fully understand that happiness set point theory, but I don’t think it is true for everyone, just 99% of people or so. I don’t think it is true for me. That said, I will acknowledge that an individual’s range of potential happiness levels is fixed. Some happy-born people, no matter how bad their lives get, will never become as unhappy as naturally unhappy people with seemingly good lives are.
tl;dr Being an agent is awesome!
[mind officially blown]
Ok, could we like Skype or something and you tell me everything you know about being happy and all of your experiences? I have a lot to learn and I enjoy hearing your stories!
Also, idk if you’ve come across this yet but what you’re doing is something that us lesswrongers like to call WINNING. Which is something that lesswrongers actually seem to struggle with quite a bit. There’s a handful of posts on it if you google. Anyway, not only are you killing it, but you seem to be doing it on purpose rather than just getting lucky. This amount of success with this amount of intentionality just must be analyzed.
You sound like you are somewhat intimidated by the people here and that they all seem super smart and everything. Don’t be. Your ability to legitimately analyze things and steer your life in the direction you want it is way more rare than you’d guess. You should seriously write about your ideas and experiences here for everyone to benefit from.
Or maybe you shouldn’t. Idk. You probably already know this, but never just listen to me or what someone else tells you (obviously). My point really is that I sense that others could legitimately benefit from your stories—idk if you judge that writing about it is the best thing for you to be doing though.
Sorry if I’m being weird. Idk. Anyway, here are the beginnings of a lot of questions I have:
Your idea to avoid not only negative things but also neutral things sounded pretty good at first, and then made a lot more sense when I heard your examples. I started thinking about my own life and the choices I’ve made and am starting to see that your approach probably would have made me better off. But… I can’t help but point out that it can’t always be true. Sometimes enduring mediocrity up front must be worth the longer-term benefits, right? But it seems like a great rule of thumb. Why? What makes a good rule of thumb? Well, my impression is that aside from being mostly right, it’s about being mostly right in a way that people normally don’t get right. Ie. being useful. And settling for neutralness instead of awesomeness seems to be a mistake that people make a lot. My friends give me shit for being close-minded (which I just laugh at). They point out how I almost never get convinced and change my mind (which is because normal people almost never think of things that I haven’t taken into consideration myself already). Anyway, I think that this may actually change my outlook on life and lead to a change in behavior. Congratulations. …so my question here was “do you just consider this a rule of thumb, and to what extent?”
This question is more just about you as a case study rather than your philosophy (I hope that doesn’t make me sound too much like a robot) - how often do you find yourself sacrificing the short term for the long term? And what is your thinking in these scenarios? And in the scenarios when you choose not to? Stories are probably useful.
You say you did competitive running. Forgive me, but I’ve never understood competitive running. It’s so painful! I get that lighter runs can be pleasant, but competitive running seems like prolonged pain to me. And so I’m surprised to hear that you did that. But I anticipate that you had good reason for doing so. Because 1) it seems to go against your natural philosophy, and you wouldn’t deviate from your natural philosophy randomly (a Bayesian would say that the prior probability of this is low), and 2) you’ve shown yourself to be someone who reasons well and is a PC (~an agent).
There’s an interesting conversation to be had about video games/TV and happiness vs. “physical motivators”. I’m a huge anti-fan of videogames/TV too. I have a feeling you have some good thoughts on this.
Your thoughts on the extent to which strategic thinking is worth it. I see a cost-benefit tradeoff of stress vs. increased likelihood of a good decision. Also, related topic—I notice that you said you spent a big chunk of time making that major decision. One of my recent theories as to how I could be happier and more productive is to allocate these big chunks of time, and then not stress over optimizing the remaining small chunks of time, due to what I judge are the cost-benefit analyses. But historically, I tend to overthink things and suffer from the stress of doing so. A big part of this is because I see the opportunity to analyze things strategically everywhere, and every time I notice myself forgoing an opportunity, I kick myself. I know it’s not rational to pursue every analysis, but… my thoughts are a bit jumbled.
Just a note—I hope rationality doesn’t taint you in any way. I sense that you should err on the side of maintaining your approach. Incremental increases in rationality usually don’t lead to incremental increases in winning, so be careful. There’s a post on that somewhere I could look up for you if you want. Have you thought about this? If so, what have your thoughts been?
Do you find mocking reality to be fun? I do sometimes. That didn’t make sense—let me explain. At some point in my junior year of college I decided to stop looking at my grades. I never took school seriously at all (since middle school at least). I enjoyed messing around. On the surface this may seem like I’m risking not achieving the outcomes I want, and that’s true, but it has the benefit of being fun, and I think that people really underestimate this. It was easy for me to not take school seriously, but I should probably apply this in life more. Idk. I’m also sort of good at taking materialistic things really not seriously. I ripped up $60 once to prove to myself that it really doesn’t matter :0. And it made me wayyy too happy, which is why I haven’t done it since (idk if that’s really really weird of me or not). I would joke around with my friends and say, “Yo, you wanna rip?”. And I really was offering them my own money up to say $100 to rip up so they could experience it for themselves. (And I fully admit that this was selfish because that money could have gone to starving kids, but so could a lot of the money I and everyone else spends. It was simply a trade of money for happiness, and it was one of the more successful ones I’ve made.) Anyway, I noticed that you flipped a coin to decide your major and got some sort of impression that something like this is your reasoning. But I only estimate a 20-30% probability of that.
I’m curious how much your happiness actually increased throughout your life. You seem to be evidence against the set point theory, which is huge. Or rather, that the set point theory in its most basic form is missing some things.
Actually, I should say that I’m probably getting a little carried away with my impressions and praise. I have to remember to take biases into account and acknowledge and communicate the truth. I have a tendency to get carried away when I come across certain ideas (don’t we all?). But I genuinely don’t think I’m getting that carried away.
Thoughts on long term planning.
Um, I’ll stop for now.
Time to go question every life decision I’ve ever made.
Hahaha, reading such fanmail just increased my happiness even more :) Sure, we can skype sometime. I’m going to wrap up my thoughts on terminal values first and then I’ll respond more thoroughly to all this, and maybe you can help me articulate some ideas that would be useful to share!
In the meantime, this reminded me of another little happiness tip I could share. So I don’t know if you’ve heard of the five “love languages,” but they are words of affirmation, acts of service, quality time, gifts, and physical touch. Everyone gives and receives in different ways. For example, I like receiving words of affirmation, and I like giving quality time. My mom likes receiving in physical touch, and giving in acts of service. The family I nanny for (in general) likes receiving in quality time and giving in gifts (like my new Kindle, which they gave me just in time to get the rationality ebook!). For people that you spend a lot of time with (family, partner, best friends, boss, co-workers), this can be worthwhile to casually bring up in conversation. Now when people know words of affirmation make me happy, they’ll be more likely to let me know when they think of something good about me or appreciate something I do. If I know the family I nanny for values quality time, I might sit around the table and chat with them an extra hour even though I’m itching to go read more of the rationality book. I know my mom values physical touch, so I hug her a lot and stuff even though I’m not generally super touchy. Happiness all around, although these decisions do get to be habits pretty quickly and don’t require much conscious effort :)
Ok, take your time. And sorry for continuing to bombard you.
Happily!
Interesting. I’ll ask more about this in the future when you’re ready.
http://lesswrong.com/r/discussion/lw/m3b/do_terminal_virtues_exist/
Just submitted my first article! I really should have asked you to edit it… if you have any suggestions of stuff/wording to change, let me know, quick!
Anyway, I’ll go reply to your happiness questions now :)
First very quick glance, there’s some things I would change. I’ll try to offer my thoughts quickly.
Edit: LW really needs a better way of collaborating. Ex. https://medium.com/about/dont-write-alone-8304190661d4. One of the things I want to do is revamp this website. Helping rational people interact and pursue things seems to be relatively high impact.
Hey, no rush. It’s a big topic and I don’t want to overwhelm you (or me!) by jumping around so much. Was there anything else you wanted to finish up first? Do you want to take a break from this intense conversation? I really don’t want to put any pressure on you.
Thanks so much!!
Ok, yeah, let’s take a little break! I’m actually about to go on a road trip to the Grand Canyon, and should really start thinking about the trip and get together some good playlists/podcasts to listen to on the drive. I’ll be back on Tuesday though and will be ready to jump back into the conversation :)
Awesome! Ok, whatever works for you.
Also:
I learned something new and seemingly relevant to this discussion listening to a podcast on the way home from the Grand Canyon: Maslow’s hierarchy of needs, which, as knowledgeable as you seem, you’re probably already familiar with. Anyway, I think I’ve been doing just fine on the bottom four my whole life. But here’s the fifth one:
Self-Actualization needs—realizing personal potential, self-fulfillment, seeking personal growth and peak experiences.
So it seems like I’m working backwards on this self-actualization list now. I’ve had tons of super cool peak experiences already. Now, for the first time, I’m kind of interested in personal growth, too. The page I linked talked about the characteristics and behavior of self-actualizers… I think it all describes me already, except for “taking responsibility and working hard.” Maybe I should just trust this psychology research and assume that if I become ambitious about something, it will actually make me even happier. What do you think? Have you learned much psychology? How relevant is this to rationality and intentionally making “winning” choices?
:) I remember reading about it for the first time in the parking lot when I was waiting for my Mom to finish up at the butcher. (I remember the place I was at when I learned a lot of things)
Psychology is very interesting to me and I know a pretty good amount about it. As far as things I’m knowledgeable about, I know a decent amount about: rationality, web development, startups, neuroscience and psychology (and basketball!). And I know a little bit about economics, science in general, philosophy, and maybe business.
Interesting. I actually figured that you were good with the top one too. For now, I’ll just say that I see it as more of a multiplier than a hole to be filled up. Ie. someone with neutral self-actualization would mostly be fine—you multiply zero (neutral) by BigNumber. Contrast this with a hole-to-be-filled-up view, where you’re as fulfilled as the hole is full. (Note that I just made this up; these aren’t actual models, as far as I know). Anyway, in the multiplier view, neutral is much much better than negative, because the negative is multiplied by BigNumber. So please be careful!
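If it helps to picture the difference, here’s a minimal sketch of the two toy views with completely made-up numbers (again, I just invented these; they aren’t real psychological models):

```python
# Toy comparison of the two made-up models above (all numbers arbitrary).

BIG_NUMBER = 100  # stands in for "everything else going on in your life"

def multiplier_view(self_actualization):
    # Wellbeing = BigNumber * self-actualization:
    # neutral (0) is basically fine, negative gets amplified.
    return BIG_NUMBER * self_actualization

def hole_view(self_actualization, hole_size=10):
    # Wellbeing = baseline + how full the "hole" is:
    # negative is only a little worse than neutral.
    return BIG_NUMBER + hole_size * self_actualization

for sa in (-1, 0, 1):
    print(sa, multiplier_view(sa), hole_view(sa))
# multiplier view: -100, 0, 100  -> negative is catastrophic
# hole view:         90, 100, 110 -> negative is only mildly worse
```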
Hi again :) I’m back from vacation and ready to continue our happiness discussion! I’m not sure how useful this will be since happiness is so subjective, but I’m more than willing to be analyzed as a case study, it sounds fun!
Oh, I still am! I wouldn’t trade my ability to make happiness-boosting choices for all their scientific and historical knowledge, but that doesn’t mean I’m not humbled and impressed by it. Now for your bullet points...
Avoiding neutralness isn’t actually a rule of thumb I’ve consciously followed or anything. It just seemed like a good way to summarize the examples I came up with of acting to increase my happiness. It does seem like a useful rule of thumb though, and I’m psyched that you think it could help you/others to be happier :) I might even consciously follow it myself from now on. But you ask whether the upfront costs of avoiding mediocrity are sometimes worth the long-term benefits… you may well be right, but I can’t come up with any examples off the top of my head. Can you?
I don’t have any clear strategies for choosing between short-term vs. long-term happiness. I think my general tendency is to favor short-term happiness, kind of a “live in the moment” approach to life. Obviously, this can’t be taken too far, or we’ll just sit around eating ice cream all day. Maybe a good rule of thumb: increase your short-term happiness as much as possible without doing anything that would have clear negative effects on your long-term happiness? Do things that make you happy in the short-term iff you think there’s a very low probability you’ll regret them? I think in general people place too much emphasis on the long-term. Like me choosing to change my major. If I ultimately were going to end up in a career I didn’t love, and I had already accepted that, what difference did it make what I majored in? In the long term, no predictable difference. But in the short-term, those last 2 years would quite possibly account for over 2% of my life. Which is more than enough to matter, more than enough to justify a day or two in deep contemplation. I think that if I consistently act in accordance with my short-term happiness (while avoiding long-term unhappiness like spending all my money and having nothing left for retirement, or eating junk food and getting fat), I’ll consistently be pretty happy. Could I achieve greater total happiness if I focused only on the long-term? Maybe! But I seem so happy right now, the potential reward doesn’t seem worth the risk.
I love that you asked about my competitive running. I do enjoy running, but I rarely push myself hard when I’m running on my own. The truth is, I wouldn’t have done it on my own. Running was a social thing for me. My best friend there was a Guatemalan “elite” (a much lower standard for this there than in the US, of course), and I was just a bit faster than she was. So we trained together, and almost every single practice was a little bit easier for me than it was for her. Gradually, we both improved a ton and ran faster and faster times, but I was always training one small notch below what I could have been doing, so it didn’t get too painful. In the races, my strategy was always negative splits: start out slowly, then pass people at the end. This was less painful and more fun. Of course, there was some pain involved, but I could sacrifice a few minutes of short-term pain in a race for the long-term benefits of prize money and feeling good about the race the whole next week. But again, it was the social aspect that got me into competitive running. I never would have pursued it all on my own; it was just a great chance to hang out with friends, practice my Spanish, stay fit, and get some fresh air.
Is strategic thinking worth it? I have no idea! I don’t think strategically on purpose; I just can’t help it. As far as I know, I was born thinking this way. We took a “strengths quest” personality test in college and “Strategic” was my number one strength. (My other four were relator, ideation, competitive, and analytical). I’m just wired to do cost-benefit analyses, I guess. Come to think of it, those strengths probably play a big role in my happiness and rationality. But for someone who isn’t instinctively strategic, how important are cost-benefit analyses? I like your idea of allocating large chunks of time, but not worrying too much in the day-to-day stuff. This kind of goes back to consequentialism vs. virtue ethics. Ask yourself what genuinely makes you happy. If it’s satisfying curiosity, just aim to ‘become more curious’ as an instrumental goal. Maybe you’ll spend time learning something new when you actually would have been happier spending that time chatting with friends, but instrumental goals are convenient and if they’re chosen well, I don’t think they’ll steer you wrong very often. Then, if you need to, maybe set aside some time every so often and analyze how much time you spend each day doing which activities. Maybe rank them according to how much happiness they give you (both long and short term, no easy task) and see if you spend time doing something that makes you a little happy, but may not be the most efficient way to maximize your happiness. Look for things that make you really happy that you don’t do often enough. Don’t let inertia control you too much, either. There’s an old saying among runners that the hardest step is the first step out the door, and it’s true. I know I’ll almost always be glad once I’m running, and feel good afterward. If I ever run for like 5 minutes and still don’t feel like running, I’ll just turn around and go home. This has happened maybe 5 times, so overall, forcing myself to run even when I don’t think I feel like it has been a good strategy.
Thanks! I don’t think it will taint me too much. Honestly, I think I had exceptionally strong rationality skills even before I started reading the ebook. Some people have lots of knowledge, great communication skills, are very responsible, etc...and they’re rational. I haven’t developed those other skills so well (yet), but at least I’m pretty good at thinking. So yeah, honestly I don’t think that reading it is going to make me happy in that it’s going to lead me to make many superior decisions (I think we agree I’ve been doing alright for myself) but it is going to make me happy in other ways. Mostly identity-seeking ways, probably.
I got a kick out of your money ripping story. I can definitely see how that could make you way more happy than spending it on a few restaurant meals, or a new pair of shoes, or some other materialistic thing :) I wouldn’t do it myself, but I think it’s cool! As for not taking school seriously for the sake of fun, I can relate… I took pride in strategically avoiding homework, studying for tests and writing outlines for papers during other classes, basically putting in as little effort as I could get away with and still get good grades (which I wanted 90% because big scholarship money was worth the small trade-off and 10% simply because my competitive nature would be annoyed if someone else did better than I did). In hindsight, I think it would have been cool to pay more attention in school and come out with some actual knowledge, but would I trade that knowledge for the hours of fun hanging out with my neighbors and talking and playing board games with my family after school? Probably not, so I can’t even say I regret my decision. As for me flipping a coin… I think that goes with your question about how much cost-benefit analysis it’s actually worthwhile to do. I seriously considered like 6 majors, narrowed it down to 2, and both seemed like great choices. I think I (subconsciously) thought of diminishing marginal returns and risk-reward here. I had already put a lot of thought into this, and there was no clear winner. What was the chance I would suddenly have a new insight and a clear winner would emerge if I just invested a few more hours of analysis, even with no new information? Not very high, so I quit while I was ahead and flipped a coin.
How much has my happiness actually increased? Some (probably due to an increase in autonomy when I left home) but not a ton, really… because I believe in a large, set happiness range, and the decisions I make keep me at the high end of it. But like I said, sometimes it will decrease to a “normal” level, and it’s soo easy to imagine just letting it stay there and not taking action.
I don’t think you’re getting carried away, either, but maybe we just think really alike :) But happiness is important to everyone, so if there’s any way it can be analyzed to help people, it seems worth a try.
Long-term planning depends on an individual’s values. Personally I think most people overrate it a bit, but it all depends on what actually makes a person happy.
I think that’s “true” in practice, but not in theory. An important distinction to make.
Definitely.
The problem is that I’m not completely sure :/. I think a lot of it falls under the category of being attached to their beliefs though. Here’s an example: I was just at lunch with a fellow programmer. He said that “the more programmers you put on a project the better”, and he meant it as an absolute rule. I pointed out the incredibly obvious point that it depends on the trade off between how much they cost and how much profit they bring in. He didn’t want to believe that he was wrong, and so he didn’t actually give consideration to what I was saying, and he continues to believe that “the more programmers you put on a project the better”.
This is an extreme case, but I think that analogous things happen all the time. The way I think about it, knowledge and aptitude don’t even really come into play, because close-mindedness limits you so much earlier on than knowledge and aptitude do. “Not stupid” is probably a better term than “smart”. To me, in order to be “not stupid”, you just have to be open-minded enough to give things an honest consideration and not stubbornly stick to what you originally believe no matter what.
In short, I think I’d say that, to me, it’s mostly about just giving an honest effort (which is a lot harder than it sounds).
What are your objectives with this blog? To convince people? Because you like writing?
Edit: idea—maybe your way of having an impact on the world is to just keep living your awesome and happy life and lead by example. Maybe you could blog about it too. Idk. But I think that just seeing examples of people like you is inspiring, and could really have a pretty big impact. It’s inspired me.
Haha, what?? Interesting.
Aha, so basically, to you, stupidity involves a lot of flinching away from ideas or evidence that contradict someone’s preconceived notions about the world. And lack of effort at overcoming bias. Yeah, most people are like that, even lots of people with high IQs and PhDs. I think you’re defining “stupid” as “irrational thinking + ugh fields” which was what I originally thought you meant until I read your example about past vs. present. Why do you think we’ll be less stupid in the future then? Just optimism, or is this connected to your thoughts on AI?
In the case of the only three posts I’ve done, they were just to defend myself, encourage anyone else who was going through similar doubts, and stir up some cognitive dissonance. I do like writing though (not so much writing itself, I have a hard time choosing the right words… but I love sharing ideas) and maybe I will soon blog about how rationality can improve happiness :) :) I actually am just about to write a “Terminal Virtues” post and share my first idea on LW. And then I want to write something with far more practical value, a guide to communicating effectively and getting along well with less rational people :)
Aw, well thanks! I am enjoying this conversation immensely, partly because I’ve never talked to someone else who was so strategic, analytical, open-minded, and knowledgeable before, and I really appreciate those qualities. And partly because I feel like even the occasional people who think I’m awesome don’t appreciate me for quite the same reason I’ve always “appreciated” myself, which I always thought was “because I’m pretty good at thinking” and which I can now call “rationality” :)
In practice, it seems to me that a lot of virtue ethicists value happiness and goodness a lot. But in theory, there’s nothing about being a virtue ethicist that says anything about what the virtues themselves are.
But I’m realizing that my incredibly literal way of thinking about this may not be that useful and that the things you’re paying attention to may be more useful. But at the same time, being literal and precise is often really important. I think that in this case we could do both, and as a team we have :)
Exactly. Another possibly good way to put it. People who are smart in the traditional way (high IQ, PhD...) have their smartness limited very much to certain domains. Ie. there might be a brilliant mathematician who has proved incredibly difficult theorems, but just doesn’t have the strength to admit that certain basic things are true. I see a lot of traditionally smart people act very stupidly in certain domains. I judge people at their worst when it comes to “not stupidness”, which is why I have perhaps extreme views. Idk, it makes sense to me. There’s something to be said for the ability to not stoop to a really low level. Maybe that’s a good way to put it: I judge people based on the lowness they’re capable of stooping to. (Man, I’m losing track of how many important things I’ve come across in talking to you.)
And similarly with morality—I very much judge people by how they act when it’s difficult to be nice. I hate when people meet someone new and conclude that they’re “so nice” just because they acted socially appropriate by making small talk and being polite. Try seeing how that person acts when they’re frustrated and are protected by the anonymity of being in a car. The difference between people at their best and their worst is huge. This clip explains exactly what I mean better than I could. (I love some of the IMO great comedians like Louis CK, Larry David and Seinfeld. I think they make a handful of legitimately insightful points about society, and they articulate and explain things in ways that make so much sense. In an intellectual sense, I understand how difficult it is to communicate things in such an intuitive way. Every little subtlety is important, and you really have to break things down to their essence. So I’m impressed by a lot of comedians in an intellectual sense, and I don’t know many others who think like that.).
And I take pride in never/very rarely stooping to these low levels. I love basketball and play pick up a lot and it’s amazing how horrible people are and how low they stoop. Cheating, bullying, fighting, selfishness, pathetic and embarrassing ego dances etc. I never cheat, ever (and needless to say I would never do any of the other pathetic stuff). And people know this and never argue with me (well, not everyone).
Oh. I love trying to find the right words. Well, sometimes it could be difficult, but I find it to be a “good difficult”. One of my favorite things to do, and one of the two or three things I think I’m most skilled at, is breaking things down to their essence. And that’s often what I think choosing the right words is about. (Although these comments aren’t exactly works of art :) )
To the extent that your goal here is to influence people, I think it’s worth being strategic about. I could offer some thoughts if you’d like. For example, that blogger site you’re using doesn’t seem to get much audience—a site like https://medium.com/ might allow you to reach more people (and has a much nicer UI).
This is a really small point though, and there are a lot of other things to consider if you want to influence people. http://www.2uo.de/influence/ is a great book on how to influence people. It’s one of the Dark Arts of rationality. If you’re interested, I’d recommend putting it on your reading list. If you’re a little interested, I’d just recommend taking 5-10 minutes to read that post. If you’re not very interested, which something tells me is somewhat likely to be true, just forget it :)
One reason why I like writing is so I could refer people to my writing instead of having to explain it 100 times. Not that I ever mind explaining things, but at the same time it is convenient to just link to an article.
But a lot of people “write for themselves”. Ie. they like to get their ideas down in words or whatever, but they make it available in case people want to read it.
I try :)
Are you trying to be modest? I can’t imagine anyone not thinking that you’re awesome.
Yea, I feel the same way, although it doesn’t bother me. It takes a rational person to appreciate another rational person (“real recognize real”), and I don’t have very high expectations of normal people.
I’ve tried to clarify my thoughts a bit:
Terminal values are ends-in-themselves. They are psychological motivators, reasons that explain decisions. (Physical motivators like addiction and inertia can also explain our decisions, but a rational person might wish to overcome them.) For most people, the only true terminal values are happiness and goodness. There is almost always significant overlap between the two. Someone who truly has a terminal value that can’t be traced back to happiness or goodness in some way is either (a) ultra-religious or (b) a special case for the social sciences.
Happiness (“likes”) refers to the optimalness of your mind-state. Hedonistic pleasure and personal fulfillment are examples of things that contribute to happiness.
Goodness refers to what leads to a happier outcome for others.
Preferences (“wants”) are what we tend to choose. These can be based on psychological or physical motivators.
Instrumental values are goals or virtues that we think will best satisfy the terminal values of happiness and goodness.
We are not always aware of what actually leads to optimal mind-states in ourselves and others.
Sounds good to me! Given the way you’ve defined things.
Edit: So what do you conclude about morality from this?
Good question. I conclude that morality (which, as far as I can tell, seems like the same thing as goodness and altruism) does exist, that our desire to be moral is the result of evolution (thanks for your scientific backup) just as much as our selfish desires are results of evolution. Whatever you call happiness, goodness falls into the same category. I think that some people are mystified when they make decisions that inefficiently optimize their happiness (like all those examples we talked about), but they shouldn’t be. Goodness is a terminal value too.
Also, morality is relative. How moral you are can be measured by some kind of altruism ratio that compares your terminal values of happiness and goodness. Someone can be “more moral” than others in the sense that he would be motivated more by goodness/altruism than he is by his own personal satisfaction, relative to them.
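To write the idea down a bit more explicitly (this is just my own toy formalization, not a standard measure):

```latex
% Toy "altruism ratio" sketch, where
%   w_g = weight placed on goodness (others' happiness)
%   w_h = weight placed on one's own happiness
R = \frac{w_g}{w_h}
% R > 1 : motivated more by goodness than by personal happiness
% R < 1 : the reverse
```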
Is there any value in this idea? No practical value, except whatever personal satisfaction value an individual assigns to clarity. I wouldn’t even call the idea a conclusion as much as a way to describe the things I understand in a slightly more clear way. I still don’t particularly like ends-in-themselves.
Reduction time:
Why should I pursue clarity or donate to effective charities that are sub-optimal happiness-maximizers?
Because those are instrumental values.
Why should I pursue these instrumental values?
Because they lead to happiness and goodness.
Why should I pursue happiness and goodness?
Because they’re terminal values.
Why should I pursue these terminal values?
Wrong question. Terminal values, by definition, are ends-in-themselves. So here the real question is not why should I, but rather, why do I pursue them? It’s because the alien-god of evolution gave us emotions that make us want to be happy and good...
Why did the alien-god give us emotions?
The alien-god does not act rationally. There is no “why.” The origin of emotion is the result of random chance. We can explain only its propagation.
Why should we be controlled by emotions that originated through random chance?
Wrong question. It’s not a matter of whether they should control us. It’s a fact that they do.
I pretty much agree. But I have one quibble that I think is worth mentioning. Someone else could just say, “No, that’s not what morality is. True morality is...”.
Actually, let me give you a chance to respond to that before elaborating. How would you respond to someone who says this?
Very very well put. Much respect and applause.
One very small comment though:
I see where you’re coming from with this. If someone else heard this out of context they’d think, “No… emotion originates from evolutionary pressure”. But then you’d say, “Yeah, but where do the evolutionary pressures come from”. The other person would say, “Uh, ultimately the big bang I guess.” And you seem to be saying, “exactly, and that’s the result of random chance”.
Some math-y/physicist-y person might argue with you here about the big bang being random. I think you could provide a very valid Bayesian counterargument saying that probability is in the mind, and that no one has a clue how the big bang/origin came to be, and so to anyone and everyone in this world, it is random.
Thanks :)
Yeah, I have no clue what evolutionary pressure means, or what the big-bang is, or any of that science stuff yet. sigh I really don’t enjoy reading hard science all that much, but I enjoy ignorance even less, so I’ll probably try to educate myself more about that stuff soon after I finish the rationality book.
Ok, that’s perfectly fair. My honest opinion is that it really isn’t very practical and if it doesn’t interest you, it probably isn’t worth it. The value of it is really just if you’re curious about the nature of reality on a fundamental level. But as far as what’s practical, I think it’s skills like breaking things down like a reductionist, open mindedness, knowledge of what biases we’re prone to etc.
Yeah, I guess one person has only so much time… at least for now… I am curious, but maybe not quite enough to justify the immense amount of time and effort it would take me to thoroughly understand.
Example case:
True morality is following God’s will? Basically everyone who says this believes “God wants what’s best for us, even when we don’t understand it.” Their understanding of God’s will and their intuitive idea of what’s best for people rarely conflict though. But here’s an extreme example of when it could: Let’s say someone strongly believes (even in belief) in God, and for some reason thinks that God wants him to sacrifice his child. This action would go against his (unrecognized) terminal value of goodness, but he could still do it, subconsciously satisfying his (unrecognized) terminal value of personal happiness. He takes comfort in his belief in God and heaven. He takes comfort in his community. To not sacrifice the child would be to deny God and lose that comfort. These thoughts obviously don’t happen on a conscious level, but they could be intuitions?
Idk, feel free to throw more “true morality is...” scenarios at me...
What if it does conflict? Does that then change what morality is?
And to play devil’s advocate, suppose the person says, “I don’t care what you say, true morality is following God’s will no matter what the effect is on goodness or happiness.” Hint: they’re not wrong.
I hope I’m not being annoying. I could just make my point if you want.
But it seems like morality is just a word people use to describe how they think they should act! People think they should act in all sorts of ways, but it seems to me like they’re subconsciously acting to achieve happiness and/or goodness.
As for your quote… such a person would be very rare, because almost anyone who defines morality as God’s will believes that God’s will is good for humanity, even if she doesn’t understand why. This belief, and acting in accordance with it, brings her happiness in the form of security. I don’t think anyone says to herself “God has an evil will, but I will serve him anyway.” Do you?
It often is. My point is that morality is just a word, and that it unfortunately doesn’t have a well agreed upon meaning. And so someone could always just say “but I define it this way”.
And so to ask what morality is is really just asking how you define it. On the other hand, asking what someone’s altruism or preference ratios are is a concrete question.
You seem to be making the point that in practice, people’s definitions of morality usually can be traced back to happiness or goodness, even if they don’t know or admit it. I sense that you’re right.
I doubt that there are many people who think that God has an evil will. But I could imagine that there are people who think that “even if I knew that God’s will was evil, following it would still be the right thing to do.”
Sure. But any definition of “right” that gives that result is more or less baked into the definition of “God’s will” (e.g. “God’s will is, by definition, right!”), and it’s not the sort of “right” I care about.
I think that’s what it often comes down to.
Yay, I got your point. Morality is definitely a more ambiguous term. You’ve helped me realize I shouldn’t use it synonymously with goodness.
Yes, my point exactly.
I am trying really hard to imagine these people, and I can’t do it. Even if God’s will includes “justice” and killing anyone who doesn’t believe, even if it’s a baby whose only defect is “original sin,” people will still say that this “just” will of God’s is moral and right.
Hmm. Well you know a ton more about this than me so I believe you.
The way I’m (operationally) defining Preferences and words like happy/utility, Preferences are by definition what provides us with the most happiness/utility. Consider this thought experiment:
So the way I’m defining Preferences, it refers to how desirable a certain mind-state is relative to other possible mind-states.
Now think about consequentialism and how stuff leads to certain consequences. Part of the consequences is the mind-state it produces for you.
Say that:
Action 1 → mind-state A
Action 2 → mind-state B
Now remember mind-states could be ranked according to how preferable they are, like in the thought experiment. Suppose that mind-state A is preferable to mind-state B.
From this, it seems to me that the following conclusion is unavoidable:
In other words, Action 1 leads you to a state of mind that you prefer over the state of mind that Action 2 leads you to. I don’t see any ways around saying that.
To make it more concrete, let’s say that Action 1 is “going on vacation” and Action 2 is “giving to charity”.
IF going on vacation produces mind-state A.
IF giving to charity produces mind-state B.
IF mind-state A is preferable to mind-state B.
THEN going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to.
I call this “preferable”, but in this case words and semantics might just be distracting. As long as you agree that “going on vacation leads you to a mind-state that is preferable to the one that giving to charity leads you to” when the first three bullet points are true, I don’t think we disagree about anything real, and that we might just be using different words for stuff.
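Just to pin the structure down, here’s the same argument restated as a few lines of code (the actions, mind-states, and rankings are placeholders I made up; only the structure matters):

```python
# The argument above as code: if mind-states can be ranked, then actions
# inherit a ranking from the mind-states they produce.

mind_state_rank = {"A": 2, "B": 1}  # higher number = more preferable mind-state

produces = {
    "going on vacation": "A",   # IF going on vacation produces mind-state A
    "giving to charity": "B",   # IF giving to charity produces mind-state B
}

def preferable(action_1, action_2):
    # THEN action_1 is "preferable" to action_2 iff the mind-state it
    # produces ranks higher than the one action_2 produces.
    return mind_state_rank[produces[action_1]] > mind_state_rank[produces[action_2]]

print(preferable("going on vacation", "giving to charity"))  # True, given the IFs
```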
Thoughts?
I do, but mainly from a standpoint of being interested in human psychology. I also wonder from a standpoint of hoping that terminal goals aren’t arbitrary and that people have an actual reason for choosing what they choose, but I’ve never found their reasoning to be convincing, and I’ve never found their informational social influence to be strong enough evidence for me to think that terminal goals aren’t arbitrary.
:))) [big smile] (Because I hope what I’m about to tell you might address a lot of your concerns and make you really happy.)
I’m pleased to tell you that we all have “that altruism mutation”. Because of the way evolution works, we evolve to maximize the spread of our genes.
So imagine that there are two moms. They each have 5 kids, and they each enter an unfortunate situation where they have to choose between themselves and their kids.
Mom 1 is selfish and chooses to save herself. Her kids then die. She goes on to not have any more kids. Therefore, her genes don’t get spread at all.
Mom 2 is unselfish and chooses to save her kids. She dies, but her genes live on through her kids.
The outcome of this situation is that there are 0 organisms with selfish genes, and 5 with unselfish genes.
And so humans (and all other animals, from what I know) have evolved a very strong instinct to protect their kin. But as we know, preference ratios diminish rapidly from there. We might care about our friends and extended family, and a little less about our extended social group, and not so much about the rest of people (which is why we go out to eat instead of paying for meals for 100s of starving kids).
As far as evolution goes, this also makes sense. A mom that acts altruistically towards her social circle would gain respect, and the tribe’s respect may lead to them protecting that mom’s children, thus increasing the chances they survive and produce offspring themselves. Of course, that altruistic act by the mom may decrease her chances of surviving to produce more offspring and to take care of her current offspring, but it’s a trade-off.* On the other hand, acting altruistically towards a random tribe across the world is unlikely to improve her children’s chances of surviving and producing offspring, so the moms that did this have historically been less successful at spreading genes than the moms that didn’t.
*Note: using mathematical models to simulate and test these trade-offs is the hard part of studying evolution. The basic ideas are actually quite simple.
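If it helps, here’s the two-moms story as a quick toy simulation (just the scenario above turned into code, with a made-up crisis probability; real evolutionary models are far more involved, as the note says):

```python
import random

# A toy run of the two-moms scenario, repeated many times.
# Not a real population-genetics model; purely illustrative.

def surviving_gene_copies(selfish, n_kids=5, crisis_prob=0.5):
    # With some probability the family hits the save-yourself-or-the-kids situation.
    if random.random() < crisis_prob:
        if selfish:
            return 0      # mom survives, kids die, she has no more kids
        return n_kids     # mom dies, but her genes live on in her kids
    return n_kids         # no crisis: kids survive either way

trials = 10_000
selfish_copies = sum(surviving_gene_copies(True) for _ in range(trials))
unselfish_copies = sum(surviving_gene_copies(False) for _ in range(trials))
print(selfish_copies, unselfish_copies)  # the unselfish gene ends up far more common
```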
I’m really sorry to hear that. I hope my being sorry isn’t offensive in any way.
Not so! Science is all about using what you do know to make hypotheses about the world and to look for observable evidence to test them. And that seems to be exactly what you were doing :)
Your hypotheses and thought experiments are really impressive. I’m beginning to suspect that you do indeed have training and are denying this in order to make a status play. [joking]
I’d just like to offer a correction here for your knowledge. Mutations spread almost entirely because they a) increase the chances that you produce offspring or b) increase the chances that the offspring survive (and presumably produce offspring themselves).
You seem to be saying that the mutation would spread because the organism remains alive. Think about it: if an organism has a mutation that increases the chances that it remains alive but that doesn’t increase the chances of having viable offspring, then that mutation would only remain in the gene pool until the organism died. And so of all the bajillions of our ancestors, only the ones still alive are candidates for the type of evolution you describe (mutations that only increase your chance of survival).
Note: I’ve since realized that you may know this already, but figured I’d keep it anyway.
I got a “comment too long error” haha