I think this whole discussion so far hides dangerous amounts of confusion around the concept of “good,” and any serious progress will involve unpacking this confusion in much more detail. Here are some other stories I think it’s important to have in the mix when thinking about this.
Goodness is about signaling: You know this one. In the ancestral environment people wanted to signal that they would make useful allies, which involves having properties like standing up for your friends, keeping your promises, etc. Perhaps they even wanted to signal that they would be good leaders of the tribe, which involves having properties like looking out for the well-being of the tribe. Also, humans are bad at lying. All this adds up to a strong incentive to signal both to yourself and to others that you care about doing things that are “good” = things that would make you a desirable ally or leader, or whatever.
Goodness is about coordinating decisions about who to back in social conflicts: This is the side-taking hypothesis of morality. Read the link for more details. This is maybe the most horrifying idea I’ve come across in the last year.
Goodness is an eldritch horror / egregore: Some crazy societal / cultural process indoctrinated you with this concept of “good” for reasons that have basically nothing to do with what you want. Cf. people who have been indoctrinated with communism or a religion, or fictional people living in a dystopia. There is just this distributed entity running on a bunch of humans propagating itself through virulent memes, and who knows what it’s optimizing for, but probably not what I want.
My story is some kind of complicated mix of these; many parts of it are nonverbal and verbalizing them would require some effort on my part. But if I had to try verbalizing, it might go something like this:
“Many people, including me, seem to have some concept of what it means for a person or action to be ‘good.’ It seems like a complicated concept and I notice I’m confused. When I try to label a person as ‘good’ or ‘bad,’ including myself, it feels like I am basically always making some kind of mistake, maybe a type error. I have some kind of desire to be able to label myself ‘good,’ which seems to come from some sense that if I am ‘good’ then I ‘deserve’ (another complicated concept I notice confusion around) to be happy, or ‘deserve’ other people’s love, or something like that.
This concept I have of ‘goodness’ came from somewhere, and I’m not sure I trust whatever process it came from. I have some sense that my desire to use it is protecting something, but whatever that is I’d rather work with it directly.
What seems a lot less complicated than ‘goodness’ or ‘badness’ is thinking about what I want. I want a lot of things, many of which involve other people. I have some sense of what it means for people to be able to trust each other and cooperate in a way that makes both of them better off, and a lot of what I want revolves around this; I want to be a trustworthy person who can cooperate with other people in ways that make both of us better off, so I can get other things I want. I also want to continue existing so I can get all the other things I want. I in fact don’t want to do a lot of the actions that I might naively want to label as ‘bad’ because they would make me less trustworthy and I don’t want that.
I have the sense that I’m made out of a bunch of parts that want different things, and those parts are still in the process of learning how to trust and cooperate with each other so they can all get more of what they want.”
One thing I’ve been playing with in the last few months is learning to stop being subject to the concept of goodness. It’s been very freeing; I feel a lot more capable of thinking through the actual consequences of my actions (including decision-theoretic consequences) and deciding if I want those consequences or not, as opposed to feeling shackled by a bunch of deontological constraints that were put into place by processes I don’t trust.
I strongly agree with the overall premise of “you should take a step back from the frame this post is premised around”, and agree that each of the frames listed here is an important piece of the puzzle.
I do still feel like there’s a frame missing here, which is to look at the deontological underpinnings Katja is pointing at and take them seriously, at face value. I think it’s a mistake to only look at those without the other frames you mention, but just as much of a mistake not to acknowledge them as an important element. I think your “here’s my attempt at verbalizing” does end up including a bit of this, but it feels incomplete.
My summary of my-own-version of this is something like...
“It’s okay to want people to be better off, happy, fulfilled, independent of anything that has to do with cooperation. It’s okay to actually just think ‘yeah, there are suffering people in the world, or existential dangers on the horizon, and this is bad, and I have the power to help, and… it’s not quite right to call that an obligation, but it’s also not quite right to call it a whim or personal preference.’ It’s okay to think this is not just ‘a thing I want’, but something that I think is… deeply good in some way. Not objectively good, but important in some way.
Because I’m a monkey running on weird hardware, my intuitions about this will not always be consistent, and figuring out how to make them consistent is important, but just because they’re inconsistent doesn’t mean that they’re meaningless or suspect.”
I think theunitofcaring.tumblr.com is the place that most consistently embodies the spirit I’m pointing at. There’s also a Rob Bensinger FB comment somewhere I can’t find that argues “Effective Altruism is an oblitunity”, which is maybe the single most succinct explanation of it (and slightly more accurate-feeling than Nate’s altruistic motivations post).
So, yes, in addition to my own story I have more thoughts about what kind of story I want for people in general, roughly along these lines:
And he said – no, absolutely, stay in your career right now. In fact, his philosophy was that you should do exactly what you feel like all the time, and not worry about altruism at all, because eventually you’ll work through your own problems, and figure yourself out, and then you’ll just naturally become an effective altruist.
Or not, and that would also be fine.
I have strong intuitions about a thing which I’ll roughly label “not skipping developmental stages.” I think there is something like a developmental stage at which thinking about altruism is natural and won’t slowly corrupt your soul, and I worry about something like people not knowing what stage they’re at, not being at this stage, and trying to pretend to themselves and others that they are. The problem, roughly, is that I think most people are trying to do EA at Kegan 3, which is subject to tons of Goodharting / signaling issues, and it seems like a bad idea to me to seriously try to do EA until Kegan 4 or 5.
I hadn’t read that link on the side-taking hypothesis of morality before, but I note that if you find that argument interesting, you would like Gillian Hadfield’s book “Rules for a Flat World”. She talks about law (not “what courts and Congress do” but broadly “the enterprise of subjecting human conduct to rules”) and emphasizes that law is similar to norms/morality, except that in addition there is a canonical place where “the rules” get posted and a canonical way to obtain a final arbitration on questions of “did person X break the rule?”. She emphasizes that these properties enable third-party enforcement of rules with much less assumption of personal risk (because otherwise, if there’s no final arbitration about whether a rule got broken, someone might punish *me* for punishing the rule-breaker). While other primates have altruism and even norms, they do not appear to have third-party enforcement. Anyway, consider this a book recommendation.
I’m a little perplexed about what you find horrifying about the side-taking hypothesis. In my view, the whole point of everything is basically to assemble the largest possible coalition of as many beings as we can possibly coordinate, using the best possible coordination mechanisms we collectively have access to, so that as many as possible of us can play this game and have a good time playing it for as long as we can. Of course we need to protect that coalition and defend it from its enemies, because there will always be enemies. But hopefully we can make there be fewer of them so that more of us can play.
If that’s the whole point of everything, then a system in which we can constantly make coordinated decisions about which side is “the big coalition of all of us” and keep the number of enemies to a minimum seems like *fantastic* technology and I want us all to be using it.
As a side note, I recently saw somewhere in the blogosphere a discussion about whether the development of human intelligence was fueled by advantages in creating laws (versus “breaking laws” or “some other reason”), but I don’t recall where that was and I would appreciate a reference if someone has one. The basic idea was that laws and morality both require a kind of abstract thinking—logical quantifiers like “for all people with property X” and “Y is allowed only if Z”—which, lo and behold, Homo sapiens seems to have evolved for some reason, and that reason might’ve been the need to reason abstractly about social rules. (Indeed, people are much better at the Wason selection task (the card-flipping task) when policing a social rule than when deducing abstract properties.)
I think there was a part of me that was still in some sense a moral realist and the side-taking hypothesis broke it.
Wow. Huge respect for noticing that and then just saying it outright. That’s… hard to do. Or at least, rare.
Also the side-taking morality link is extremely thought-provoking; it led to one of those “wow how come I never thought of this before...” moments—thanks.
I notice that it feels easier for me to ask, “What is good?” than to ask, “What do I want?”
This is something that I don’t super endorse and am trying to investigate more. It’s also possible that I do have easy access to what I want, but my “Is this good?” sensor shoots down using that for decision making really fast.