Sorry, I don’t think I’m being clear.
The notion I’m trying to express is not an entirely altruistic EV, or even a deliberately altruistic one. Simply, this person has friends and family and such, and thus has a partially social EV; this person is at least altruistic towards close associates when it costs them nothing.
My claim, then, is that if we let n denote the number of hops from any one person to any other in the social graph of such agents:
lim_{n → 0} (Social Component of Personal EV) = species-wide CEV
Now, there may be special cases, such as people who don’t give a shit about anyone but themselves. But the idea is that as social connectedness grows, benefiting only myself and my loved ones becomes more and more expensive and unwieldy (for instance, income inequality and guard labor already have sizable, well-studied economic costs, and that’s before we’re talking about potential improvements to the human condition from AI!) compared to just doing things that are good for everyone, without regard to people’s connection to myself (they’re bound to connect through a mutual friend or relative with some low degree, after all) or social status (because, again, status enforcement is expensive).
So while the total degree to which I care about other people is limited (Social Component of Personal EV ≤ Personal EV), eventually that component should approximate the CEV of everyone reachable from me in the social graph.
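As a toy sketch of the limit claim, suppose each person weights everyone else by a discount that decays with hop distance in the social graph. The graph, names, and the exponential discount are my illustration, not anything from the comment; the point is only that when hop distance stops mattering (the n → 0 regime), the weights become uniform, i.e. the social component stops privileging my neighbours and treats everyone reachable from me equally:

```python
from collections import deque

def hop_distances(graph, source):
    """BFS hop counts from source to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def social_weights(graph, source, shrink=1.0):
    """Weight each other person by 2 ** -(shrink * hops), normalized.

    `shrink` is a made-up knob standing in for social distance mattering.
    As shrink -> 0 (everyone is effectively zero hops away), the weights
    become uniform over everyone reachable -- the CEV-like limit.
    """
    dist = hop_distances(graph, source)
    raw = {v: 2.0 ** (-shrink * d) for v, d in dist.items() if v != source}
    total = sum(raw.values())
    return {v: w / total for v, w in raw.items()}

# A toy social graph: me -> friends -> friend-of-friend -> stranger.
graph = {
    "me": ["alice", "bob"],
    "alice": ["me", "carol"],
    "bob": ["me", "carol"],
    "carol": ["alice", "bob", "dan"],
    "dan": ["carol"],
}

near = social_weights(graph, "me", shrink=2.0)  # strongly local: alice > carol > dan
flat = social_weights(graph, "me", shrink=0.0)  # distance irrelevant: everyone 0.25
```

With a high `shrink`, my social EV is dominated by close associates; with `shrink = 0`, it is a flat average over everyone reachable, which is the sense in which the social component approximates the group's CEV rather than my neighbourhood's.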
The question, then, becomes whether that Social Component of my Personal EV is large enough to overwhelm some of my own personal preferences (I participate in a broader society voluntarily) or whether my personal values overwhelm my consideration of other people’s feelings (I conquer the world and crush you beneath my feet).