When one species learns to cooperate with others of its own kind, the better to exploit everything outside that particular agreement, this does not seem to me even metaphorically comparable to some sort of universal benevolent force, but just another thing that happens in our brutish, amoral world.
That’s a fair point.
I suspect that both of those may be running off the same basic algorithm, with other components just dictating what that algorithm gets applied to and, by default, preventing it from being applied too broadly.
But I could be wrong about that. And even if it were the same basic algorithm, running it in “limited” vs. “universal” mode does cause some significant qualitative differences, even if the underlying difference were arguably just quantitative. So I do think that a more precise view would be to consider these as different-but-related forces in the same pantheon: one force for just banding together with your ingroup, and another for some more universal love.
Or you could view it the way it was viewed in The Goddess of Everything Else: going from a purely solitary existence, to banding together, to using that cooperation to exploit outgroups, to then expanding the moral circle to outgroups as well, represents steps in the dance of the force for harmony and the force for conflict. (Of course, in reality these steps are not separated in time, but are constantly intertwined with each other.) The banding together within the same species bears the signature of the force for cooperation and self-sacrifice, but also that of the force for conflict and destruction… and then again that of the force for cooperation, as it can be turned into more universal caring.
The problem with this is that there is no game-theoretical reason to expand the circle to, say, non-human animals. We might do it, and I hope we do, but it wouldn’t benefit us practically. Animals have no negotiating power, so their treatment is entirely up to the arbitrary preferences of whatever group of humans ends up in charge, and so far that hasn’t worked out so well (for the animals, anyway; the social contract chugs along just fine).
The ingroup-preference force is backed by game theory, and so is the expansion of the ingroup to other groups that have some bargaining power; but the “universal love” force, if there is such a thing, is not. There is no force of game theory that would stop us from keeping factory farms going even post-singularity, or from doing something equivalent with other powerless beings we create for that purpose.
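To make that asymmetry concrete, here is a minimal sketch with hypothetical payoff numbers: in an iterated prisoner’s dilemma, cooperating pays against a partner who can retaliate, while against a partner with no way to punish defection, defecting strictly dominates.

```python
# Hypothetical payoffs to "us" for (our_move, their_move);
# C = cooperate, D = defect. Standard PD ordering: T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: we cooperate, they defect
    ("D", "C"): 5,  # T: we defect on a cooperator
    ("D", "D"): 1,  # P: mutual defection
}

def total_payoff(ours, theirs, rounds=100):
    """Our total score in an iterated game; each strategy sees the
    opponent's previous move (None on the first round)."""
    our_last = their_last = None
    score = 0
    for _ in range(rounds):
        our_move, their_move = ours(their_last), theirs(our_last)
        score += PAYOFF[(our_move, their_move)]
        our_last, their_last = our_move, their_move
    return score

# A partner with bargaining power: cooperates until defected on.
tit_for_tat = lambda opp_last: "C" if opp_last is None else opp_last
# A powerless partner: cannot retaliate no matter what we do.
powerless = lambda opp_last: "C"

cooperate = lambda opp_last: "C"
defect = lambda opp_last: "D"

# Against a retaliator, cooperation wins:
print(total_payoff(cooperate, tit_for_tat))  # 300
print(total_payoff(defect, tit_for_tat))     # 104
# Against a powerless player, defection strictly dominates:
print(total_payoff(cooperate, powerless))    # 300
print(total_payoff(defect, powerless))       # 500
```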
I think I agree with this. Do you mean it as disagreement with something I said, or just as an observation?
Sorry if I came off as confrontational; I just mean to say that the forces you mention, which are backed by deep mathematical laws, aren’t fully aligned with “the good” and aren’t proof that things will work out well in the end. If you agree, good; I just worry that, with posts like these, people will latch onto “Elua” or something similar as a type of unjustified optimism.
No worries! Yeah, I agree with that. These paragraphs were actually trying to explicitly say that things may very well not work out in the end, but maybe that wasn’t clear enough:
Love doesn’t always win. There are situations where loyalty, cooperation, and love win, and there are situations where disloyalty, selfishness, and hatred win. If that wasn’t the case, humans wouldn’t be so clearly capable of both.
It’s possible for people and cultures to settle into stable equilibria where trust and happiness dominate and become increasingly beneficial for everyone, but also for them to settle into stable equilibria where mistrust and misery dominate, or anything in between.
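A minimal sketch of that last point, again with hypothetical payoffs: in a stag-hunt-style coordination game, “everyone trusts” and “everyone mistrusts” are both stable equilibria, and which one a population settles into depends on where it starts.

```python
# Hypothetical payoffs: trusting yields 4 when the partner also trusts
# and 0 otherwise; mistrusting yields a safe 2 regardless.
def trust_payoff(p):
    """Expected payoff of trusting when a fraction p of others trust."""
    return 4 * p

def mistrust_payoff(p):
    return 2

def evolve(p, steps=200, rate=0.1):
    """Crude replicator-style dynamics: whichever strategy currently
    pays better grows its share of the population."""
    for _ in range(steps):
        p += rate * p * (1 - p) * (trust_payoff(p) - mistrust_payoff(p))
        p = min(max(p, 0.0), 1.0)
    return p

# Two nearby starting points, two very different stable outcomes:
print(round(evolve(0.6), 2))  # ~1.0: trust takes over
print(round(evolve(0.4), 2))  # ~0.0: mistrust takes over
```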