Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
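The contrast can be made concrete with a minimal sketch (all numbers and names here are illustrative assumptions, not from the original discussion): model the split as a Nash bargaining compromise between two sub-agents with linear utilities, one per charity. The bargaining solution lands on an interior split, while a single unified agent with a linear utility always goes to a corner and gives everything to one charity.

```python
# Hypothetical sketch: two sub-agents bargaining over how to split a $100 donation.
# Sub-agent A's utility is the amount given to charity A; sub-agent B's is the
# amount given to charity B. Assumed model: the Nash bargaining solution, which
# maximizes the product of the two utilities.

def nash_split(total=100.0, steps=1000):
    """Grid-search the split of `total` that maximizes the Nash product u_A * u_B."""
    best_x, best_product = 0.0, -1.0
    for i in range(steps + 1):
        x = total * i / steps          # amount to charity A
        product = x * (total - x)      # u_A(x) * u_B(x)
        if product > best_product:
            best_x, best_product = x, product
    return best_x

def unified_choice(total=100.0, weight_a=0.6):
    """A unified agent maximizing weight_a*x + (1-weight_a)*(total-x) has a
    linear objective, so its optimum is a corner: all to one charity."""
    return total if weight_a > 0.5 else 0.0

print(nash_split())       # 50.0 -- the compromise splits the donation evenly
print(unified_choice())   # 100.0 -- the unified agent gives everything to one side
```

The even 50/50 split falls out of the symmetric utilities assumed here; asymmetric bargaining power or nonlinear utilities would shift it, but any interior split already behaves unlike a unified linear maximizer.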
What about Aumann’s agreement theorem? Doesn’t this picture assume that contributions to a charity are based on genuinely subjective considerations that are only “right” from the inside perspective of certain algorithms? Not that I disagree.
Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how many of the usual heuristics, designed for unified rational agents, still effectively apply to humans?
Bob comes to agree that Alice likes ballet—likes it a lot. Alice comes to agree that Bob prefers nature to art. They don’t come to agree that art is better than nature, nor that nature is better than art. Because neither is true! “Better than” is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet).
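The point about the arity of “better than” can be sketched in a few lines of code (a minimal illustration; the relation and its entries are just the two propositions from the example above):

```python
# "Better than" as a three-place relation Better(agent, x, y), not a
# two-place Better(x, y). Stored facts are exactly the two propositions
# Alice and Bob end up agreeing on.
preferences = {
    ("Alice", "ballet", "Audubon"): True,   # Alice prefers ballet to Audubon
    ("Bob", "Audubon", "ballet"): True,     # Bob prefers Audubon to ballet
}

def better(agent, x, y):
    """Return whether `agent` ranks `x` above `y`; unknown pairs default to False."""
    return preferences.get((agent, x, y), False)

# Both parties can assent to both facts without contradiction, because the
# two propositions carry different agent arguments:
assert better("Alice", "ballet", "Audubon")
assert better("Bob", "Audubon", "ballet")
```

The agent-free question “is ballet better than Audubon?” simply never arises in this representation; there is no two-place predicate to disagree about.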
...if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how many of the usual heuristics, designed for unified rational agents, still effectively apply to humans?
Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I’m talking about real compound agents created either by bargaining among humans or by FAI engineers.
But the notion that the well-known less-than-perfect rationality of real humans might be usefully modeled by positing a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even supply an evolutionary-psychology just-so story explaining why natural selection might prefer to place multiple agents into a single head.
Nicely put, very interesting.