Huh… I think the crux of our differences here is that I don’t view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior—I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness over a) performing a trick like donating which isn’t really that unpleasant in-itself or b) directly resolving the emotional pain in a way that doesn’t modify the ethical framework/ultimate actions really perturbs me.
Can you confirm that the above interpretation is appropriate? I think it's less-clearly-true than just “positive reinforcement vs punishment” (which I agree with) and I want to be careful interpreting it in this way. If I do, it will significantly update my world-model/strategy.
I think the self is not especially unified in practice for most people: the elephant and the rider, as it were. (Even the elephant can have something like subagents.) That’s not quite true, but it’s more true than the idea of a human as a unitary agent.
I’m mostly selfish and partly altruistic, and the altruistic part is working hard to make sure that its negotiated portion of the attention/energy/resource budget doesn’t go to waste. Part of that is strategizing about how to make the other parts come along for the ride more willingly.
Reframing things to myself, in ways that don’t change the truth value but do change the emphasis, is very useful. Other parts of me don’t necessarily speak logic, but they do speak metaphor.
I agree that you and I experience the world very differently, and I assert that my experience is the more common one, even among rationalists.
Thanks for confirming. For what it’s worth, I can envision your experience being a somewhat frequent one (and I think it’s probably actually more common among rationalists than among the average Jo). It’s somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world and give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions almost just because I point out that they can. There’s no specific ethical sub-agent and specific selfish sub-agent, just a whole vaguely selfish person with accurate framing and a willingness to allocate resources when it’s easy.
Maybe these people have not internalized the implications of a low zero-point world in the same way we have, but it generally pushes me away from a sub-agent framing with respect to the average person.
I’ll also agree with your implication that my experience is relatively uncommon. I do far more internal double cruxes than the norm and it’s definitely led to some unusual psychology—I’m planning on doing a post on it one of these days.