Caring about the future is also problematic, because the utility of the distant future then overwhelms any considerations about the present.
Indeed! I am still waiting for this problem to be tackled. At what point is an expected utility maximizer (without time preferences) going to satisfy its utility function, or is the whole purpose of expected utility maximization to maximize expected utility rather than actual utility?
People here talk about the possibility of a positive Singularity as if it were some sort of payoff. I don’t see that. If you think it is rational to donate money to the SIAI to enable it to create a galactic civilisation, then it would be just as rational, once you reached the post-Singularitarian paradise, to donate any computational resources to the ruling FAI to enable it to overcome the heat-death of the universe. Just as the current risks from AI comprise vast amounts of disutility, so does the heat-death of the universe.
At what point are we going to enjoy life? If you can’t answer that basic question, what does it mean to win?
This is the problem of balance. It is easy enough to solve, if you are willing to discard some locally cherished assumptions.
First, discard the assumption that every agent ought to follow the same utility function (assumed because it seems to be required by universalist, consequentialist approaches to ethics).
Second, discard the assumption that decision making is to be done by a unified (singleton) agent which seeks to maximize expected utility.
Replace the first with the more realistic and standard assumption that we are dealing with a population of interacting egoistic agents, each with its own personal utility function: a population whose membership changes over time with agent births (commissionings) and deaths (decommissionings).
Replace the second with the assumption that collective action is described by something like a Nash bargaining solution—that is, it cannot be described by just a composite utility function. You need a multi-dimensional composite utility (to designate the Pareto frontier) and “fairness” constraints (to pick out the solution point on the Pareto surface).
Simple example (to illustrate how one kind of balance is achieved): Alice prefers the arts to the outdoors; Bob is a conservationist. Left to herself, rational Alice would donate all of her charity budget to the municipal ballet company; Bob would donate to the Audubon Society. Bob and Alice marry. How do they make joint charitable contributions?
Obvious answer: They split their donation, thus achieving a balance between two interests. This would be an irrational thing for a unified rational agent to do, but it is (collectively) rational for a collective.
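For concreteness, here is a minimal sketch of that answer in Nash-bargaining terms. The linear utilities, the zero disagreement point, and the brute-force search are my own illustrative assumptions, not anything specified above:

```python
# A minimal sketch of the Nash bargaining story behind "split the donation".
# Assumed for illustration: Alice only values the share x of the joint budget
# that goes to the ballet, Bob only values the share (1 - x) that goes to the
# Audubon Society, and the disagreement point is "no joint donation" at (0, 0).

def alice_utility(x):
    return x          # fraction of the joint budget going to the ballet

def bob_utility(x):
    return 1.0 - x    # fraction going to the Audubon Society

disagreement = (0.0, 0.0)

def nash_product(x):
    # The Nash bargaining solution maximizes the product of the two agents'
    # utility gains over the disagreement point, along the Pareto frontier.
    return (alice_utility(x) - disagreement[0]) * (bob_utility(x) - disagreement[1])

# Brute-force search over candidate splits.
best_x = max((i / 1000.0 for i in range(1001)), key=nash_product)
print(best_x)  # 0.5: the even split
```

A single agent maximizing either utility function alone would push the split to a corner; the even split only appears once there are two utility functions plus a fairness condition (here, the symmetry of the Nash product).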
More pertinent example: generation X is in a society with generation Y and (expected, not-yet-born) generation Z. GenX has the power to preserve some object which will be very important to GenZ. But it has very little direct incentive to undertake the preservation, because it discounts the future. However, GenZ has some bargaining power over GenY (GenZ’s production will pay GenY’s pensions) and GenY has bargaining power over GenX. Hence a Nash bargain is struck in which GenX acts as if it cared about GenZ’s welfare, even though it doesn’t.
But, even though GenZ’s welfare has some instrumental importance to GenX, it cannot come to have so much importance that it overwhelms GenX’s hedonism. A balance must be achieved precisely because a bargain is being struck. The instrumental value (to GenX) of the preservationist behavior exists specifically because it yields hedonistic utility to GenX (in trade).
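A toy version of the three-way bargain, with every number invented purely for illustration:

```python
# Toy model of the chained bargain. GenX does not value the preserved object
# at all; it only values the transfer it receives. GenZ values the object and
# funds GenY's pensions; GenY passes part of that on to GenX, conditional on
# GenX actually preserving the object.

PRESERVATION_COST_TO_X = 3.0   # direct cost of preservation to GenX
TRANSFER_Y_TO_X        = 5.0   # what GenY pays GenX for preserving it
PENSION_Z_TO_Y         = 8.0   # what GenZ pays GenY, conditional on the deal
OBJECT_VALUE_TO_Z      = 20.0  # how much GenZ values the preserved object

def genx_gain(preserve):
    # Purely hedonistic: the transfer received minus the preservation cost.
    return TRANSFER_Y_TO_X - PRESERVATION_COST_TO_X if preserve else 0.0

def geny_gain(preserve):
    return PENSION_Z_TO_Y - TRANSFER_Y_TO_X if preserve else 0.0

def genz_gain(preserve):
    return OBJECT_VALUE_TO_Z - PENSION_Z_TO_Y if preserve else 0.0

for name, gain in [("GenX", genx_gain), ("GenY", geny_gain), ("GenZ", genz_gain)]:
    print(name, "gains", gain(True), "with the bargain vs", gain(False), "without")
```

Every generation gains, so preservation happens even though GenX’s utility function never mentions GenZ’s welfare; and because GenX’s motive is a finite transfer, GenZ’s welfare cannot swamp GenX’s hedonism.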
Nicely put, very interesting.
What about Aumann’s agreement theorem? Doesn’t this assume that contributions to a charity are based upon genuinely subjective considerations that are only “right” from the inside perspective of certain algorithms? Not to say that I disagree.
Also, if you assume that humans are actually compounds of elementary utility functions trying to reach some sort of equilibrium, how much of the usual heuristics, created for unified rational agents, are then effectively applicable to humans?
Bob comes to agree that Alice likes ballet—likes it a lot. Alice comes to agree that Bob prefers nature to art. They don’t come to agree that art is better than nature, nor that nature is better than art. Because neither is true! “Better than” is a three-place predicate (taking an agent id as an argument). And the two agree on the propositions Better(Alice, ballet, Audubon) and Better(Bob, Audubon, ballet).
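A minimal rendering of that three-place predicate; representing each agent’s preferences as an ordered list is just an illustrative choice:

```python
# "Better than" with the agent as an explicit first argument.

preferences = {
    # agent -> ranking from most to least preferred
    "Alice": ["ballet", "Audubon"],
    "Bob":   ["Audubon", "ballet"],
}

def better(agent, x, y):
    ranking = preferences[agent]
    return ranking.index(x) < ranking.index(y)

# Both parties can agree on both of these propositions, because neither is
# the two-place claim "ballet is better than Audubon" simpliciter.
assert better("Alice", "ballet", "Audubon")
assert better("Bob", "Audubon", "ballet")
```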
Assume that individual humans are compounds? That is not what I am suggesting in the above comment. I’m talking about real compound agents created either by bargaining among humans or by FAI engineers.
But the notion that the well-known less-than-perfect rationality of real humans might be usefully modeled by assuming they have a bunch of competing and collaborating agents within their heads is an interesting one which has not escaped my attention. And, if pressed, I can even provide an evolutionary psychology just-so-story explaining why natural selection might prefer to place multiple agents into a single head.
Would you accept “at some currently unknown point” as an answer? Or is the issue that you think enjoyment of life will be put off infinitely? Whatever the right way to deal with possible infinities may be (if such a way is even needed), that policy of indefinite postponement is obviously irrational.
Your risk-of-dying function determines the frontier between units devoted to hedonism and units devoted to continuation of experience.
Ok, but which side of the frontier is which?
I have seen people argue that we discount the future because we fear dying, and are therefore devoted to instant hedonism. But if there were no reason to fear death, we would be willing to delay gratification and look to the glorious future.
It doesn’t seem to be much of a problem to me—because of instrumental discounting.
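A minimal sketch of how instrumental discounting can fall out of a risk-of-dying function, assuming (my assumption, not anything stated in the thread) a constant per-period probability of death and no time preference in the utility function itself:

```python
# No discount factor appears in the utility function; the effective weight on
# a delayed payoff is just the probability of surviving long enough to get it.

def effective_weight(death_risk_per_period, delay_in_periods):
    survival_per_period = 1.0 - death_risk_per_period
    return survival_per_period ** delay_in_periods

# With a 2% chance of dying each period, a payoff 50 periods away keeps only
# about 36% of its face-value weight; drive the death risk toward zero and
# the same payoff keeps nearly all of it.
print(effective_weight(0.02, 50))    # ~0.364
print(effective_weight(0.0001, 50))  # ~0.995
```

The lower the death risk, the closer the effective discount factor is to one, which is the “no reason to fear death” case above.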
Enjoying life and securing the future are not mutually exclusive.
Optimizing for enjoyment of life or for security of the future is, at least superficially, mutually exclusive if resources are finite and fungible between the two goals.
Agreed. I don’t see significant fungibility here.
Downvoted for being simple disagreement.
Why not try tackling it yourself?