I think this is oversimplified in a number of dimensions, to the point that I don’t think the framing makes particularly useful predictions or recommendations.
Particularly missing are types of caring and the scalability of assistance provided. I simply can’t help very many people move, and there are almost zero individuals who I’d give cash to in the amounts that I donate to larger orgs. Also, and incredibly important, there’s the idea of reciprocity—a lot of support for known, close individuals has actual return support (not legibly accountable, but still very real).
This makes it hard for me to put much belief behind the model when you use terms like “altruistic” or “selfish”. I suspect those terms CAN have useful meaning, but I don’t think this captures it.
Particularly missing are types of caring and the scalability of assistance provided.
I’m not sure what you mean. Would you mind elaborating?
I simply can’t help very many people move, and there are almost zero individuals who I’d give cash to in the amounts that I donate to larger orgs.
I’m not understanding what the implications of these things are.
Also, and incredibly important, there’s the idea of reciprocity—a lot of support for known, close individuals has actual return support (not legibly accountable, but still very real).
I think that would be factored in to expected utility. Like by helping a friend move, it probably leads to the friend wanting to scratch your back in the future, which increases the expected utility for you of helping them move.
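To make that concrete, here’s a toy sketch of what I mean. The numbers and the breakdown into terms are completely made up; it’s just meant to illustrate the structure of the calculation, not any actual values.

```python
# Toy expected-utility calculation for "help a friend move", in made-up
# "utility points". The decomposition into terms is illustrative only.

def expected_utility_of_helping(
    effort_cost=-5.0,            # an afternoon of lifting boxes
    warm_fuzzies=2.0,            # feeling good about helping
    friend_weight=0.5,           # how much I weigh my friend's wellbeing
    friend_benefit=8.0,          # how much the move actually helps them
    p_reciprocation=0.6,         # chance the favor gets returned someday
    value_of_returned_favor=4.0,
):
    reciprocity_term = p_reciprocation * value_of_returned_favor
    return effort_cost + warm_fuzzies + friend_weight * friend_benefit + reciprocity_term

print(expected_utility_of_helping())                     # 3.4
print(expected_utility_of_helping(p_reciprocation=0.0))  # 1.0
```

The reciprocity you’re pointing at is just one more positive term in the sum: it raises the expected utility of helping known, close people without changing anything else about the framing.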
I think I’m mostly responding to
I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.
I disagree that this conception of moral weights is directly related to “how altruistic” someone is. And perhaps even that “altruistic” is sufficiently small-dimensioned to be meaningfully compared across humans or actions.
I think that would be factored in to expected utility.
Sure, but so does everything else (including your expectations of stranger-wellness improvement).
I disagree that this conception of moral weights is directly related to “how altruistic” someone is. And perhaps even that “altruistic” is sufficiently small-dimensioned to be meaningfully compared across humans or actions.
Hm. I’m trying to think about what this means exactly and where our cruxes are, but I’m not sure about either. Let me give it an initial stab.
This feels like an argument over definitions. I’m saying “here is a way of defining altruism that seems promising”. You’re saying “I don’t think that’s a good way to define altruism”. Furthermore, you’re saying that you think a good definition of altruism should be at least medium-dimensioned, whereas I’m proposing one that is single-dimensioned.
Context: I don’t think I have a great understanding of the A Human’s Guide To Words sequence. I’ve spent time reading and re-reading it over the years, but for some reason I don’t feel like I have a deep understanding of it. So please point out anything you think I might be missing.
My thoughts here are similar to what I said in this comment: that it’s useful to distinguish between (1) moral weights and (2) the actions that one takes. It’s hard to say why exactly, but it feels right to me to call (1) “altruism”.
It’s also not immediately clear to me what other dimensions would be part of altruism, and so I’m having trouble thinking about whether or not a useful definition of altruism would have (many) other dimensions to it.
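For whatever it’s worth, here’s the kind of single-dimensioned thing I have in mind. The weights, the names, and the aggregation rule are all hypothetical; there are obviously other ways you could collapse a set of moral weights into one number.

```python
# Hypothetical moral weights: how much I weigh each person's wellbeing
# relative to my own (self = 1.0). Names and numbers are made up.
moral_weights = {
    "self": 1.0,
    "close_friend": 0.7,
    "acquaintance": 0.2,
    "distant_stranger": 0.05,
}

# One possible single-number summary: the average weight placed on people
# other than yourself. Under this toy definition, a higher score would
# count as "more altruistic".
other_weights = [w for person, w in moral_weights.items() if person != "self"]
altruism_score = sum(other_weights) / len(other_weights)

print(round(altruism_score, 3))  # 0.317
```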
Sure, but so does everything else (including your expectations of stranger-wellness improvement).
Good point. Maybe it’d make sense then to, when thinking about altruism, kinda factor out the warm fuzzies you get from doing good. Because if you include those warm fuzzies (and whatever other desirable things), then maybe everything is selfish.
On second thought, maybe you can say that this actually hits on what altruism really is: the amount of warm fuzzies you get from doing good to others.
I think we’re disagreeing over the concepts and models that have words like “altruism” as handles, rather than over the words themselves. But they’re obviously related, so maybe it’s “just” the words, and we have no way to identify whether they’re similar concepts. There’s something about the cluster of concepts that the label “altruistic” invokes that gives me (and I believe others, since it was chosen as a big part of a large movement) some warm fuzzies—I’m not sure if I’m trying to analyze those fuzzies or the concepts themselves.
I (weakly) don’t think the utility model works very well in conjunction with altruism-is-good / (some)warm-fuzzies-are-suspect models. They’re talking about different levels of abstraction in human motivation. Neither is true; both are useful, but for different things.
Gotcha. If so, I’m not seeing it. Do you have any thoughts on where specifically we disagree?
My own thoughts and reactions are somewhat illegible to me, so I’m not certain this is my true objection. But I think our disagreement is what I mentioned above: Utility functions and cost-benefit calculations are tools for decisions and predictions, where “altruism” and moral judgements are orthogonal and not really measurable using the same tools.
I do consider myself somewhat altruistic, in that I’ll sacrifice a bit of my own comfort to (I hope and imagine) help near and distant strangers. And I want to encourage others to be that way as well. I don’t think framing it as “because my utility function includes terms for strangers” is more helpful or more true than “because virtuous people help strangers”. And in the back of my mind I suspect there’s a fair bit of self-deception, in that I mostly prefer it because that belief-agreement (or at least apparent agreement) makes my life easier and maintains my status in my main communities.
I do agree with your (and Tim Urban’s) observation that “emotional distance” is a thing, and it varies in import among people. I’ve often modeled it (for myself) as an inverse-square relationship between how much emotional investment I have in someone and my informational distance from them (how often I interact with them), but that’s not quite right. I don’t agree with using this observation to measure altruism or moral judgement.
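Roughly something like the following, where the scale constant, the example distances, and the labels are all arbitrary, and which (as I said) isn’t quite right:

```python
# Toy inverse-square model: emotional investment falls off with the square
# of "informational distance" (roughly, the inverse of how often I interact
# with someone). The constant k and the example distances are arbitrary.
def emotional_investment(informational_distance, k=1.0):
    return k / informational_distance ** 2

for label, d in [("partner", 1), ("friend", 2), ("acquaintance", 4), ("stranger", 10)]:
    print(label, emotional_investment(d))
# partner 1.0, friend 0.25, acquaintance 0.0625, stranger 0.01
```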
My own thoughts and reactions are somewhat illegible to me, so I’m not certain this is my true objection.
That makes sense. I feel like that happens to me sometimes as well.
But I think our disagreement is what I mentioned above: Utility functions and cost-benefit calculations are tools for decisions and predictions, where “altruism” and moral judgements are orthogonal and not really measurable using the same tools.
I see. That sounds correct. (And also probably isn’t worth diving into here.)
I do agree with your (and Tim Urban’s) observation that “emotional distance” is a thing, and it varies in import among people. I’ve often modeled it (for myself) as an inverse-square relationship between how much emotional investment I have in someone and my informational distance from them (how often I interact with them), but that’s not quite right. I don’t agree with using this observation to measure altruism or moral judgement.
Gotcha. After posting and discussing in the comments a bit, this is something that I wish I had hit on in the post: that even if “altruism” isn’t quite the right concept, there’s probably some related concept (like “emotional distance”) that maps to what I discussed in the post.