How would you respond if I said I’m a rationalist, but I don’t feel a strong motivation to make the world a better place?
To be clear, I do recognize that making the world a better place is a good thing; I just don’t feel much intrinsic motivation to actually do it.
I guess in part it’s because I expect genuinely trying to improve things (rather than making a token effort) to be a rather difficult and thankless task.
Also, as far as I can tell, my psychological makeup is such that feeling, thinking, or being told that I’m “obligated” to do something actually decreases my motivation. So the idea that “I’m supposed to do that because it’s the ethical thing to do” doesn’t work for me either.
I do like the idea of making the world a better place as long as I can do that while doing something that inspires me or that I feel good about doing. Part of the reason, I think, is that I don’t see myself being able to do something I really don’t enjoy for long enough that it produces meaningful results. So in order for it to work, it pretty much has to be something I actually like doing.
In the end, I estimate that I’m more likely to accomplish things with social benefit if I focus on my own needs and wait until I feel inspired to do something for others (or until there’s an overlap between meeting my needs and doing things for others), rather than trying to force an intention to do things for others (and then feeling I’m not being honest with myself and that I don’t actually have that intention).
I don’t know how to feel about that.
The standard pledge for people in the rationalist sphere trying to make the world a better place is 10% of income to effective charities, which, if you’re making the typical kind of money for this site’s demographics, is closer to “token” than “difficult and thankless task”, even if it’s loads more than most people do.
Personally, my response was to notice how little guilt I felt for not living up to moral obligations, decide I was evil, and functionally become an egoist while still thinking of utilitarianism as “the true morality”.
That’s interesting, and I can relate to some of what you said. Thank you for sharing.
I’m a rationalist, but I don’t feel a strong motivation to make the world a better place?
There is no connection between being a rationalist and trying to make the world a better place.
What is a “better place” is a function of your values, anyway. People tend to disagree about that and occasionally go to war to figure out their disagreement :-/
My own desire to “make the world a better place” is rather attenuated, rather local, generally restricted to people I know and like.
In my own case, I have concluded that human morality is purely inherited sentiment. So I do stuff that feels good to me and skip the rest. So I gave $5 and a hamburger to a homeless guy I saw at a fast food place I frequent, but feel no particular desire to identify a charity which is effective at feeding other homeless people. The guy I supported made it to a position in front of my face, which is all I need to get sentimental.
I love my family and my children and my friends. I’ll help them with stuff in interesting ways. If you want my help, figure out how to become my friend. Don’t try to convince me abstractly that you “deserve” it or that helping you is more “effective” than helping my already fairly well off family and friends.
So yeah, I think it is quite possible to be rational in the sense of wanting to figure out truth from falsehood, and to not be particularly altruistic in an abstract sense.
What you feel is perfectly normal. Humans are not automatically strategic; we use adaptations instead of maximizing values. Think about your brain as a machine built with some heuristics… it works okay on average, in the ancient jungle. Do not overestimate it; it does not have the magical power of doing the right thing. As a rationalist, you should see the limitations of your own mind.
If we want to achieve more, we have to be strategic (or have luck). Find out what realistically motivates you: (1) punishments and rewards, (2) peer pressure. This is your environment. It may support you in your goals, it may actively work against your goals, or it may just move you in a random direction. And you do not have a magical power to overcome that pressure.
All you can do is find a few moments of extraordinary willpower and clarity of mind, and use those moments strategically to (a) steer your life towards a better future, and (b) increase the probability of having these lucid moments in the future. For example, if your environment works against your goals, you may change your environment so it works less against you in the future. Or try to create a habit that would push you in the direction you want to be pushed. If you do it strategically for a longer time, these small changes may add up, and your life may change.
I do recognize that making the world a better place is a good thing; I just don’t feel much intrinsic motivation to actually do it.
This is what a human brain does when it does not receive social rewards (and possibly receives social punishments) for thinking about making the world a better place.
thinking or being told that I’m “obligated” to do something actually decreases my motivation
I guess in the past, “being told you are obligated to do something” was probably a good predictor of coming punishment (if you failed to fulfill your obligation). Also, “obligation” often means that if you do it successfully, you will not receive a reward because, hey, you merely did your duty. Of course you hate these all-pain-no-gain obligations.
I don’t see myself being able to do something I really don’t enjoy for long enough that it produces meaningful results
That’s how the human brain is built. You can’t enjoy something you don’t receive rewards for. The difference between humans is that some of them were trained to give themselves internal rewards for doing some stuff; then they can enjoy doing that stuff even without visible results.
I estimate that I’m more likely to accomplish things with social benefit if I focus on my own needs and wait until I feel inspired to do something for others
...or you could try to create some social reward system. This is easier said than done, but maybe you could find a group of people with similar goals, tell each other about the good stuff you did, and provide each other with social rewards.
The human brain is designed to work according to some rules. You cannot overcome these rules, but you can try to change your environment so that these rules start working for you instead of against you.
I think your analysis is largely correct.
A lot of this is very accurate, and a little depressing since I probably do need a social reward system, or a support network—and I don’t see an easy way to create one right now. :/
I do like having more clarity though, and understanding of what actually is the problem here.
As an example, I want to make a computer game. Programming has the advantage of providing quick feedback, if you are doing it well. I decide to add a new feature, I write it, then I run the game, and I see the feature is there. I get some reward in the form of seeing the new feature that works.
(And “doing it well” in this context means developing the program in small steps, where each step gives you some visible outcome. Small iterations, as opposed to one complex step that takes a lot of time and provides no results until it is completed. Note that “visible outcome” does not necessarily mean something displayed on the screen during a normal run of the program. It is something that you as a programmer can see, for example a successful unit test of a function that never interacts with the screen. I suspect that the impact of unit tests on a programmer’s morale is more important than their impact on the correctness of the code.)
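As a minimal sketch of what such a small, self-verifying step could look like (Python here; the game mechanic and function name are purely hypothetical, not from anyone’s actual project):

```python
import unittest

# A hypothetical one-feature step: a pure function for a game mechanic.
def damage_after_armor(raw_damage: int, armor: int) -> int:
    """Damage a unit takes after its armor absorbs part of the hit."""
    return max(0, raw_damage - armor)

class TestDamageAfterArmor(unittest.TestCase):
    # Each passing test is a "visible outcome", even though the function
    # itself never draws anything on the screen.
    def test_armor_absorbs_part_of_the_hit(self):
        self.assertEqual(damage_after_armor(10, 3), 7)

    def test_damage_is_never_negative(self):
        self.assertEqual(damage_after_armor(2, 5), 0)

if __name__ == "__main__":
    unittest.main()
```

Running the file gives a pass/fail answer within seconds, long before the game as a whole is playable.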
But this is still just feedback from a computer. There is no social feedback here. So I need another support layer to get that. I have friends who are also computer programmers. So whenever I add some new feature to the program, I send them the program along with the source code by e-mail. I do not expect them to inspect the source code too much; usually they just start the program and click on the new feature I have added. But I know they are programmers, and that the possibility of looking at the source code is there. Also, as programmers they can better understand and appreciate the features I have added. (To a non-programmer, trivial stuff often seems very hard, while with the hard stuff they sometimes don’t even understand why it had to be done.) So now my programming has a social dimension, long before the program is finished. And we do it by e-mail (and a Skype talk once in a while, and meeting in person once a month), so even everyday geographical proximity is not needed. Of course meeting more frequently in person would be even better.
You could try to find this kind of support here. Or anywhere else.
One important detail about this kind of “observer support” is that it works best if it provides you only positive feedback. That is, when you do something and send it, you get a “that’s nice!” reaction, and when you do nothing for a longer time, you only get a gentle reminder. (As opposed to people criticizing you: “hey, it’s been five days and you did nothing, man, wake up”, or even criticizing your progress as insufficient: “all you did in three days was this lousy green rectangle, at this rate you will not complete it in a thousand years”.) Any progress = good. Any lack of progress = neutral. There is nothing negative. (As a general rule, punishments are way overrated. They usually bring more harm than good, especially in the long term.) Sometimes it is difficult to find people who give this kind of feedback; some people are not interested at all, some people are too eager and switch to slavemaster mode.
So, what would you like to have a social reward system for?
That’s interesting. Thank you for a detailed explanation of this.
I can agree a lot with the “only positive/neutral feedback” rule.
I’m not sure, but this got me thinking in a good way. I like this question.
My own position is closer to ‘humans can reliably make the world a better place only in incremental local ways, and it inevitably goes wrong at large scales because our world map is inevitably horribly, horribly flawed no matter how hard we try to perfect it outside extraordinarily narrow areas’ than ‘not much motivation to do it’. I totally get what you are saying, though, and it can lead to similar results. If money is available, there are a few simple ways to throw a bit of it around (e.g. GiveWell) which are exactly such incremental local things.
Incidentally, I would actually call for dissensus on how to make the world a better place. The more things people are trying, the better the odds that something will actually work and then get picked up on.
If I were to take a reductionist approach, what’s the connection between rationality and making the world a better place?
I understand that a rationalist can potentially have any kind of goals, not necessarily altruistic ones.
The reason for bringing this up is that I want to see if this kind of topic can be discussed here on LW at all. And me being an (aspiring) rationalist is very relevant information here.
Asking questions is one of the most rational things you can do. So screw “LessWrong”. If some people aren’t willing to discuss an issue with you like adults then you can’t really call them rational. They should just quit to a photography blog or something.
A few thoughts here:
There’s a concept called “Right Action”: acting by using your logic to fulfill your values. We all have things that scare us, bore us, etc., but you can make the choice to act on what you ultimately value. Sometimes, you just choose to do what you think is right, regardless of how you feel.
One thing that could help is to remove the word “should” from your mental vocabulary. As per the above, every moment is a choice. You get to choose whether to act on what you value. This takes “saving the world” from something that is repelling because of obligation to something that is compelling because of choice.
One other thing that might help is to remove any thoughts of “making the world a better place” from your mind. This is a huge goal, it’s daunting, and it’s not actionable. Instead, what might work is to focus on a particular project, and even then, only on the very next action to take. I have a long term plan to make the world a better place, but “making the world a better place” almost never enters my day to day thoughts except as a reminder of WHY I’m taking those small, individual actions.
Finally, something that’s helped me is to think about emotional and willpower sustainability (which you talk about at the very bottom). There are a few things you can do in that regard. Firstly, find a project to focus on that excites you and is mostly work that you enjoy. Secondly, if you’re doing something that is boring/scary/unfulfilling to you (as every project sometimes requires), see if you can delegate it. Thirdly, if you can’t delegate it, make sure to take breaks and give yourself permission to do things that recharge you.
Human beings derive joy from doing good. Studies on happiness find that this is one of the bigger correlates of happiness. If you’re at all normal, there’s probably a lot of room for you to do more good and be happier.
As for intrinsic motivation and System 1… it’s difficult; updating your System 1 isn’t as straightforward as updating your System 2 (aka using evidence to update your beliefs). One day I plan on writing a post about this...
However, there are some things I’d like to note:
I guess in part it’s because I expect genuinely trying to improve things (rather than making a token effort) to be a rather difficult and thankless task.
I don’t think it’s that difficult or thankless (although I’m definitely in the minority here and I don’t know anyone as optimistic on this front as I am, so take that for what you will). For example, take this very website/community. There are tons of relatively simple and straightforward improvements that could be made that I think would have a relatively high impact, like making the website easier to use and adding new features. For example, adding a section that makes it easy for LWers to brainstorm and collaborate on projects. That’s a high level action that I could see trickling down and having a big impact. And if you’re talking “genuinely” as in making fundamental changes to the way things work… I’ve got some thoughts here.
Also, as far as I can tell, my psychological makeup is such that feeling, thinking, or being told that I’m “obligated” to do something actually decreases my motivation. So the idea that “I’m supposed to do that because it’s the ethical thing to do” doesn’t work for me either.
Me too :/. I think that it’s easy to give this spite too much weight as you make decisions. To some extent, I think it’s ok to “let the spite be”. Trying to exert complete control over these sorts of emotions is too stressful. Whatever marginal gains you make in making your emotions “more accurate” are probably outweighed by the stress it causes. Finding the right balance is difficult though.
I do like the idea of making the world a better place as long as I can do that while doing something that inspires me or that I feel good about doing. Part of the reason, I think, is that I don’t see myself being able to do something I really don’t enjoy for long enough that it produces meaningful results. So in order for it to work, it pretty much has to be something I actually like doing.
I think that you’d be more motivated if a) you thought you had a better chance at succeeding and b) recognized how big an impact altruism probably has on your happiness.
I don’t know how to feel about that.
For the record, I admire your honest attempts at introspection and truth.
That’s pretty much my attitude as well.
How would you respond if I said I’m a rationalist, but I don’t feel a strong motivation to make the world a better place?
With just this information, I’d likely say that being an aspiring rationalist doesn’t really have anything to do with your goals, as it’s mostly about methods of reaching your goals rather than telling you what your goals should be.
Following it up with this:
To be clear, I do recognize that making the world a better place is a good thing; I just don’t feel much intrinsic motivation to actually do it.
Confuses me a bit, however.
If one of your goals is making the world a better place (that’s how I’d rephrase the statement “I do recognize making the world a better place is a good thing”, seeing as saying things like “X is good” generally means “X is a desirable state of the world we should strive for”), your intrinsic motivation shouldn’t matter one bit.
I have little intrinsic motivation for eating healthy. Preparing food is boring to me and I don’t particularly enjoy eating most healthy things. I still try to eat healthy, because one of my goals is living for a very, very long time.
I guess in part it’s because I expect genuinely trying to improve things (rather than making a token effort) to be a rather difficult and thankless task.
On the one hand: how difficult is it to give 10% (or even 5 or 1 percent, if your income is very low) to an effective charity?
On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn’t become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.
Part of the reason, I think, is that I don’t see myself being able to do something I really don’t enjoy for long enough that it produces meaningful results. So in order for it to work, it pretty much has to be something I actually like doing.
This is one of the many reasons why effective altruism works: it allows you to contribute to solving big problems while doing something you enjoy and are good at.
(Or we can wait for /u/blacktrance to come in and try to convince you that egoism is the right way to go.)
On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn’t become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.
Historically, isn’t that exactly how the world became a better place? Better technology and better institutions are the ingredients of reduced suffering, and both of these seem to have developed by people pursuing solutions to their own (very local) problems, like how to make money and how to stop the government from abusing you. Even scientists who work far upstream of any application seem to be more motivated by curiosity and fame than by a desire to reduce global suffering.
Of course, modern wealth disparities may have changed the situation. But we should be explicit about it if we think that we’ve entered a new historical phase in which the largest future reductions in suffering are going to come from globally-altruistic motivations.
Compared to what, medieval Europe?
Yes. Richer states can afford to transfer more wealth. We see this in the size of modern (domestic) welfare states, which could not have been shouldered even a century ago.
Well, Rome was basically a welfare state two millennia ago.
If one of your goals is making the world a better place (that’s how I’d rephrase the statement “I do recognize making the world a better place is a good thing”, seeing as saying things like “X is good” generally means “X is a desirable state of the world we should strive for”), your intrinsic motivation shouldn’t matter one bit.
That’s not exactly what I meant, but nevertheless this is a good point.
On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn’t become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.
Ok, let’s play this out.
As I already said, I have good reason to believe that “should-based” motivation wouldn’t work for me.
So what I’m wondering is, am I allowed to say “due to the way my mind currently works I’m choosing to optimize X by not actively committing to doing X” without running into the “you’re not trying hard enough” kind of argument?
Just because some people do things in a particular way doesn’t mean I can or should try to do things the same way. It may simply not work for me. This may include thinking in a certain way or having a particular mindset.
So what I’m wondering is, am I allowed to say “due to the way my mind currently works I’m choosing to optimize X by not actively committing to doing X” without running into the “you’re not trying hard enough” kind of argument?
I’d say yes, even if it would only be to prevent worse things.
To quote one of Yvain’s recent posts:
This might be a similar situation. If your choice is between doing nothing and doing something, doing something is pretty much always better. (Assuming you do useful things, but let’s take that for granted for now.)
If you follow the standard Less Wrong interpretation of utilitarianism, you’re pretty much never doing enough to improve the world. Of course no-one actually holds you to such unreasonable standards, because doing so would be pretty insane. If you tried to be a perfect utility maximizer, you’d end up paralyzed by decision fear, anxiety and/or depression, and that doesn’t get us anywhere at all.
Since I’m quoting people, here’s a useful quote to have come out of the tumblr rationalists:
To make that more specific to your own situation:
Maybe saying “Alright, I’ll give 10% of my income and we’ll leave it at that” doesn’t work for you, for whatever reason. Of course you’re allowed to figure out something else that does work for you. That’s what rationality is all about: reaching your goals, even if the standard approach doesn’t work for you.
That being said, it might still be interesting to see if changing the way your mind works isn’t easier. (It probably isn’t, but just in case...) From what you describe, it sounds like a form of akrasia, which you might be able to work around in other ways than a variant of planned procrastination.