I think the self is not especially unified in practice for most people: the elephant and the rider, as it were. (Even the elephant can have something like subagents.) That model isn't quite true, but it's more true than the idea of a human as a unitary agent.
I’m mostly selfish and partly altruistic, and the altruistic part is working hard to make sure that its negotiated portion of the attention/energy/resource budget doesn’t go to waste. Part of that is strategizing about how to make the other parts come along for the ride more willingly.
Reframing things to myself, in ways that don’t change the truth value but do change the emphasis, is very useful. Other parts of me don’t necessarily speak logic, but they do speak metaphor.
I agree that you and I experience the world very differently, and I assert that my experience is the more common one, even among rationalists.
Thanks for confirming. For what it’s worth, I can envision your experience being a somewhat frequent one (and I think it’s probably more common among rationalists than among the average Jo). It’s somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world and give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions almost just because I point out that they can. There’s no specific ethical sub-agent and no specific selfish sub-agent, just a whole vaguely selfish person with accurate framing and a willingness to allocate resources when it’s easy.
Maybe these people have not internalized the implications of a low zero-point world in the same way we have, but it generally pushes me away from a sub-agent framing with respect to the average person.
I’ll also agree with your implication that my experience is relatively uncommon. I do far more internal double cruxes than the norm, and it’s definitely led to some unusual psychology; I’m planning on doing a post on it one of these days.