Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.
Indeed! I am still waiting for this problem to be tackled. … At what point are we going to enjoy life? If you can’t answer that basic question, what does it mean to win?
This is the problem of balance. It is easy enough to solve, if you are willing to discard some locally cherished assumptions.
First discard the assumption that every agent ought to follow the same utility function (assumed because it seems to be required by universalist, consequentialist approaches to ethics).
Second, discard the assumption that decision making is to be done by a unified (singleton) agent which seeks to maximize expected utility.
Replace the first with the more realistic and standard assumption that we are dealing with a population of interacting egoistic agents, each with its own personal utility function. The population's membership changes over time with agent births (commissionings) and deaths (decommissionings).
Replace the second with the assumption that collective action is described by something like a Nash bargaining solution—that is, it cannot be described by just a composite utility function. You need a multi-dimensional composite utility (to designate the Pareto frontier) and “fairness” constraints (to pick out the solution point on the Pareto surface).
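For concreteness, here is the standard statement of that solution concept (textbook Nash bargaining, nothing specific to this comment): with utility functions u_1, ..., u_n, a feasible set S of joint outcomes, and a disagreement point d (what each agent gets if no bargain is struck), the bargain selects

$$
x^{*} = \arg\max_{\substack{x \in S \\ u_i(x) \ge d_i \ \forall i}} \; \prod_{i=1}^{n} \bigl( u_i(x) - d_i \bigr).
$$

The constraint set confines the answer to the Pareto frontier; the product is one particular "fairness" rule for picking the point on it, and different fairness constraints would pick different points on the same frontier.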
A simple example, to illustrate how one kind of balance is achieved: Alice prefers the arts to the outdoors; Bob is a conservationist. Left to herself, rational Alice would donate all of her charity budget to the municipal ballet company; left to himself, Bob would donate his to the Audubon Society. Bob and Alice marry. How do they make joint charitable contributions?
Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
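A minimal numeric sketch of that answer, under assumptions that are mine rather than the comment's: Alice's utility is the amount going to ballet, Bob's is the amount going to Audubon, and the disagreement outcome is that no joint donation happens at all. Maximizing the Nash product over ways to divide a joint budget then lands on the even split.

```python
# Nash bargaining over how to divide a joint charity budget (illustrative sketch).
# Assumptions (mine): Alice's utility is the amount going to ballet, Bob's is the
# amount going to Audubon, and the disagreement outcome is that no joint donation
# happens at all, leaving both at zero.

def nash_split(budget: float, steps: int = 10_000) -> float:
    """Return the amount sent to the ballet company that maximizes the Nash product."""
    d_alice, d_bob = 0.0, 0.0                # disagreement-point utilities
    best_x, best_product = 0.0, float("-inf")
    for i in range(steps + 1):
        x = budget * i / steps               # candidate amount for the ballet company
        u_alice = x - d_alice                # Alice's gain over the disagreement point
        u_bob = (budget - x) - d_bob         # Bob's gain over the disagreement point
        product = u_alice * u_bob            # the Nash product
        if product > best_product:
            best_x, best_product = x, product
    return best_x

print(nash_split(1000.0))  # ~500.0: the bargain splits the donation evenly
```

The even split drops out of the symmetry of this setup; different utility curves or a different disagreement point would move the division, but the bargain still picks some interior balance point rather than either partner's preferred corner.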
More pertinent example: generation X is in a society with generation Y and (expected, not-yet-born) generation Z. GenX has the power to preserve some object which will be very important to GenZ. But it has very little direct incentive to undertake the preservation, because it discounts the future. However, GenZ has some bargaining power over GenY (GenZ’s production will pay GenY’s pensions) and GenY has bargaining power over GenX. Hence a Nash bargain is struck in which GenX acts as if it cared about GenZ’s welfare, even though it doesn’t.
But, even though GenZ’s welfare has some instrumental importance to GenX, it cannot come to have so much importance that it overwhelms GenX’s hedonism. A balance must be achieved specifically because a bargain is being struck. The instrumental value (to GenX) of the preservationist behavior exists specifically because it yields hedonistic utility to GenX (in trade).
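A toy version of that chain of bargaining power, with every number invented purely for illustration: GenZ values the preserved object, GenX bears the preservation cost, and value flows backwards only through GenZ funding GenY's pensions and GenY transferring something to GenX. Grid-searching for the feasible deal that maximizes the three-way Nash product (disagreement point: no deal, no transfers, no preservation, everyone at zero) yields a deal in which GenX preserves, not because it values GenZ's welfare, but because it is paid to.

```python
import itertools

# Toy three-generation Nash bargain; every number here is an invented assumption.
# GenX can preserve an object at cost COST; only GenZ values it, at VALUE.
# Value flows backwards solely through transfers: GenZ -> GenY (pension funding),
# then GenY -> GenX. Disagreement point: no deal, no transfers, no preservation.

COST, VALUE = 3.0, 9.0
GRID = [round(i * 0.1, 1) for i in range(101)]   # candidate transfer sizes 0.0 .. 10.0

def utilities(preserve, t_y_to_x, t_z_to_y):
    # GenX does not value the object itself: only its cost and the transfer it receives.
    u_x = t_y_to_x - (COST if preserve else 0.0)
    u_y = t_z_to_y - t_y_to_x
    u_z = (VALUE if preserve else 0.0) - t_z_to_y
    return u_x, u_y, u_z

best = None
for preserve, t_yx, t_zy in itertools.product([False, True], GRID, GRID):
    u = utilities(preserve, t_yx, t_zy)
    if min(u) < 0.0:
        continue                  # someone would rather take the disagreement point
    nash_product = u[0] * u[1] * u[2]
    if best is None or nash_product > best[0]:
        best = (nash_product, preserve, t_yx, t_zy, u)

print(best)
# Winning deal: preserve=True, with the transfers sized so that each generation
# keeps roughly an equal share of the VALUE - COST surplus.
```

The structure of the result is the point: GenZ's welfare enters GenX's calculation only through the transfer GenX receives in trade, which is exactly the instrumental-but-not-overwhelming balance described above.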
Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
What about Aumann’s agreement theorem? Doesn’t this assume that contributions to a charity are based upon genuinely subjective considerations that are only “right” from the inside perspective of certain algorithms? Not to say that I disagree.
Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how many of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?
Bob comes to agree that Alice likes ballet—likes it a lot. Alice comes to agree that Bob prefers nature to art. They don’t come to agree that art is better than nature, nor that nature is better than art. Because neither is true! “Better than” is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet).
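A sketch of that three-place predicate in code, with the agent names and orderings taken from the example above (everything else is illustrative):

```python
# "Better than" as a three-place predicate: Better(agent, x, y).
# There is deliberately no two-place better(x, y), because in this picture
# no agent-independent fact of that form exists.

# Each agent's options, from most to least preferred (from the example above).
PREFERENCES = {
    "Alice": ["ballet", "Audubon"],
    "Bob": ["Audubon", "ballet"],
}

def better(agent: str, x: str, y: str) -> bool:
    """True iff `agent` ranks option x strictly above option y."""
    ranking = PREFERENCES[agent]
    return ranking.index(x) < ranking.index(y)

assert better("Alice", "ballet", "Audubon")  # a proposition both spouses can agree on
assert better("Bob", "Audubon", "ballet")    # as can this one
```

Both assertions hold at once, and nothing in the setup even lets you ask the two-place question "is ballet better than Audubon?" without supplying an agent.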
...if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how many of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?
Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I’m talking about real compound agents created either by bargaining among humans or by FAI engineers.
But the notion that the well-known less-than-perfect rationality of real humans might be usefully modeled by assuming they have a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even provide an evolutionary psychology just-so-story explaining why natural selection might prefer to place multiple agents into a single head.
Nicely put, very interesting.