A year ago, I thought honestly about my long-term and short-term goals. I concluded that “utilitarian concern for global human flourishing” was something like maybe 5-10% of my personal utility function. The rest is a combination of desire for personal happiness and artistic development. Included in “personal happiness” is feeling like a good person, which I track separately from “actually being a good person according to utilitarian ethics.” (It’s a lot easier to feel like a good person than to be one. This is true even when I know that I’m only feeling like a good person.)
I think utilitarian types need to be honest about what their values are. It lets you make more informed choices about which courses of action are long-term sustainable. Then you can decide either to craft a long-term plan you can manage (in which you continuously do reasonably good things for the world), or figure out how to spend your life going on various “binges” (e.g. spend a few months working full-time on an Effective Altruist project, then spend a few months vagabonding, or what have you).
One of my primary goals is to grow the Effective Altruist community in a responsible manner, and one thing I’ve been wondering is “can and should EA people be created, or do you need to find people who naturally gravitate towards EA goals and just help them sort out their priorities?”
I was once a non-EA person, and I became an EA person over the course of 7 years. So it’s obviously possible for people who don’t currently identify as world-savers to change their values, or at least think that they’re changing their values. But trying to manage even that 5-10% slice of my utility function HAS been stressful for me, and I’m not sure I’d have wanted someone to turn me into my present self without my consent.
This post seems like a pretty important data point, though I’m not sure exactly how I should be updating.
Regardless, Diego—you seem like you’ve already made up your mind, but I do support you going off and vagabonding for a while (I plan to do so myself sometime in the future). I hope that afterwards you find some better long-term solutions that preserve all of your values.
I strongly approve of this. I think you can get way more value from a larger number of people who stick with being good to a smaller extent than from a tiny number who burn out.

Especially considering that other people might notice the burnouts and decide to not try being good.
I think utilitarian types need to be honest about what their values are.
Can you dissolve/unpack this a little?
One related thing I would have benefited from in the past (and possibly still would) is being more honest about how difficult it would be to put my values into practice.
“Values” is a tricky word (so tricky, in fact, that I think it would be reasonable to say that “values” aren’t actually a real thing in the first place). I’m using it approximately to mean “things that you care about.”
I want humanity to flourish and unnecessary suffering to end. But there’s a limit to my caring energy, and I have to divide it between “universal utilitarian good” (UUG) and “things I personally want for myself.” Right now, UUG gets about 5-10% of my caring energy.
I would take a pill that increased UUG to 15-20% of my caring energy. (And in real life, this takes the form of investing myself in altruist communities, which reinforces my self-image as someone who does good things.) But I honestly don’t have any interest in becoming 100% altruist.
Part of me wants to be able to say “I’d take a pill that makes me 100% altruist, so that I only feel motivation to do the bare minimum of selfish things to survive, and otherwise direct my energy to whatever accomplishes the most good.” It’s a nice thought to believe that about myself. But it’s not true. (If I became 5% more altruist, I might want to become an additional 5% more altruist, and maybe the cycle would repeat. Not sure. But if I got to take exactly one pill, and there would be no more pills after, I don’t think I’d choose to become more than 50% altruist.)
One related thing I would have benefited from in the past (and possibly still would) is being more honest about how difficult it would be to put my values into practice.

That is also important, and slightly different.
Thanks—your comment implied a concrete example of what I was after: someone who thinks that they would take the 100% altruism pill, when in fact they wouldn’t, isn’t being honest about their values. I found this helpful.
I think it would be reasonable to say that “values” aren’t actually a real thing in the first place. I’m using it approximately to mean “things that you care about.”
I’d hazard a guess that calling it something different doesn’t make it any realer, but we don’t need to get into that right now ;-)
EDIT: what I meant by “concrete” in this case was “without reifying values/preferences/caring etc.”
That is how I feel. I chose to test, for two months, how it feels to be the maximum-percentage altruist; I got to nearly 70%. But after the two-month precommitment ended, the whole story of this post started to emerge.
Now 0% and 80% sound emotionally alike, because I dug too deep.