(Splitting replies on different parts into different subthreads.)
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the “bare standard of human decency” is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)
For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their meat/egg consumption by two-thirds. The “insufficiently horrified” framing makes it sound like neither of the two people in the latter case really count, while at least one person in the former does count.
Do you agree (without getting into which outcome is easier for activism to achieve) that the latter outcome is preferable to the former? And separately, does it aesthetically feel better or worse?
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the “bare standard of human decency” is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)
I share your problem with purity ethics… I almost agree with this? Frankly, I have some issue with taking the claim “a utilitarian with a different zero-point/bare-standard of decency has the same utility function, so feel free to move yours!” and juxtaposing it with something like the claim “it’s alright to not be very utilitarian!” The two claims partly undercut each other. Don’t get me wrong, there’s definitely some sort of ethical Pareto frontier where you balance the strength of each claim individually, but unless that balancing is made explicit, I’m not thrilled.
For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their meat/egg consumption by two-thirds. The “insufficiently horrified” framing makes it sound like neither of the two people in the latter case really count, while at least one person in the former does count.
There are two things going on here: the actual action of eating less meat, and the internal experience of horror. Actions that reduce meat consumption point to short-term ethical improvements, but people who are horrified by consuming meat point to much longer-term ethical improvements. If I had a choice between two people who cut meat by two-thirds, and the same two people doing the same thing while also being kinda horrified by what they’re doing, I’d choose the latter.
Do you agree (without getting into which outcome is easier for activism to achieve) that the latter outcome is preferable to the former? And separately, does it aesthetically feel better or worse?
For similar reasons, I’d prefer one vegan over two people who each cut meat by two-thirds. Being vegan points to a real level of experienced horror, which in turn suggests a long-term ethical ally. Cutting meat by two-thirds points towards people who are kinda uncomfortable with animal suffering (though, honestly, more likely motivated by health concerns) but who probably aren’t going to take any significantly helpful actions about it.
And in reverse, on the margin I’d prefer a meat-eater who eats meat out of physical necessity but is horrified by it over a vegan who is vegan only because that’s how they grew up. The long-term implication of the horror is sometimes better than the direct consequence of the action.
Thank you for confirming. I wanted to be sure I wasn’t putting words in your mouth.
I think I just have a very different model than you of what most people tend to do when they’re constantly horrified by their own actions.
I’m sorry about the animal welfare relevance of this analogy, but it’s the best one I have:
The difference between positive reinforcement and punishment is staggering; you can train a circus animal to do complex tricks using either method, but only under positive reinforcement will the animal voluntarily engage further with the trainer. Train an animal with punishment and it will tend to avoid further training, and will escape the circus if at all possible.
This is why I think your psychology is unusual. I expect a typical person filled with horror about a behavior to change that behavior for a while (do the trained trick), but eventually find a way to not think about it (avoid the trainer) or change their beliefs so that they no longer find it horrible (escape the circus). I can believe that your personal history makes the horror an extremely motivating force for you. I just don’t think that’s the default way for people to respond to those sorts of experiences and feelings.
It’s also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives positive reinforcement for helpful actions, rather than punishing oneself for any departure from helpful actions. And I expect that to help most people go farther.
Huh… I think the crux of our differences here is that I don’t view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior; I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness, rather than a) performing a trick like donating (which isn’t really that unpleasant in itself) or b) directly resolving the emotional pain in a way that doesn’t modify the ethical framework or the ultimate actions, really perturbs me.
Can you confirm that the above interpretation is appropriate? I think it’s less clearly true than just “positive reinforcement vs. punishment” (which I agree with), and I want to be careful about interpreting it this way. If I do, it will significantly update my world-model/strategy.
I think the self is not especially unified in practice for most people: the elephant and the rider, as it were. (Even the elephant can have something like subagents.) That’s not quite true, but it’s more true than the idea of a human as a unitary agent.
I’m mostly selfish and partly altruistic, and the altruistic part is working hard to make sure that its negotiated portion of the attention/energy/resource budget doesn’t go to waste. Part of that is strategizing about how to make the other parts come along for the ride more willingly.
Reframing things to myself, in ways that don’t change the truth value but do change the emphasis, is very useful. Other parts of me don’t necessarily speak logic, but they do speak metaphor.
I agree that you and I experience the world very differently, and I assert that my experience is the more common one, even among rationalists.
Thanks for confirming. For what it’s worth, I can envision your experience being a somewhat frequent one (and I think it’s probably more common among rationalists than for the average Jo). It’s somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world and give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions almost just because I point out that they can. There’s no distinct ethical sub-agent and selfish sub-agent, just a whole, vaguely selfish person with accurate framing and a willingness to allocate resources when it’s easy.
Maybe these people haven’t internalized the implications of a low zero-point world in the same way we have, but it generally pushes me away from a sub-agent framing with respect to the average person.
I’ll also agree with your implication that my experience is relatively uncommon. I do far more internal double cruxes than the norm and it’s definitely led to some unusual psychology—I’m planning on doing a post on it one of these days.
It’s also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives positive reinforcement for helpful actions, rather than punishing oneself for any departure from helpful actions.
I just want to point out that, while two utility functions that differ only in zero point produce the same outcomes, a single utility function with a dynamically moving zero-point does not. If I just pushed the world into the positive yesterday, why do I have to do it again today? The human brain is more clever than that and, to successfully get away with it, you’d have to be using some really nonstandard utilitarianism.
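(Restating the standard invariance claim in symbols, purely as a sketch and with notation chosen here for illustration: write $U$ for the utility function, $z$ for the zero point, and let $a$ range over the available actions. For any fixed constant $z$,

$$\arg\max_a \big(U(a) - z\big) = \arg\max_a U(a),$$

since $z$ is the same for every candidate action and cancels in the comparison. A zero point that moves in response to what you just did is precisely where that cancellation stops being a good description: the ranking of actions may stay the same, but the felt size of each reward changes, and that size is what does the reinforcing or punishing.)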
Of course you shouldn’t plan to reset the zero point after actions! That’s very different.
I use this sparingly, for observing big new facts that I didn’t cause to be true. That doesn’t change the relative expected utilities of various actions, so long as my expected change in utility from future observations is zero.