This points out an under-developed part of utility theory (interpersonal comparison among different-duration-or-intensity agents is the other). You don’t need infinity for it—you can pump your intuition even with fixed-duration utility comparisons. For example, is it better to suffer an hour of torture on your deathbed, or 60 years of unpleasant allergic reaction to common environmental particles?
Basically, there is no agreement on how utility adds up (or decays) over time, and whether it’s a stock or a flow. The most defensible set of assumptions is that it’s not actually a quantity that you can do math on—it’s only an ordinal measure of preferences, and only applicable at a decision-point. But that’s VERY limited for any moral theory (what one “should” do), and not even that great for decision theories (what one actually does) that want to understand multiple actions over a period of time.
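To make the ambiguity concrete, here’s a throwaway sketch in Python (all the utility numbers are invented for illustration, not claims about how bad torture or allergies actually are): the deathbed-torture vs. 60-years-of-allergies question gets opposite answers depending on whether you treat utility as a flow to be summed or as an instantaneous quantity.

```python
# Sketch only: numbers are made up. The point is that the ranking flips
# depending on which aggregation rule you pick.

HOURS_PER_YEAR = 365 * 24

# Each experience stream is a list of (utils_per_hour, hours) segments.
torture_hour = [(-10_000.0, 1)]                 # one awful hour at the end of life
mild_allergy = [(-0.05, 60 * HOURS_PER_YEAR)]   # 60 years of low-grade misery

def total_flow(stream):
    # Utility as a flow: integrate (sum) it over the whole lifetime.
    return sum(rate * hours for rate, hours in stream)

def worst_moment(stream):
    # Utility as an instantaneous quantity: only the worst point matters.
    return min(rate for rate, _ in stream)

for name, rule in [("total flow", total_flow), ("worst moment", worst_moment)]:
    print(name, "| torture:", rule(torture_hour), "| allergy:", rule(mild_allergy))

# total flow   -> allergy is worse  (-26,280 vs -10,000)
# worst moment -> torture is worse  (-10,000 vs -0.05)
```

Neither rule is obviously the right one, which is the point: the comparison isn’t well-posed until you commit to an aggregation scheme.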
I may be wrong—this seems an obvious enough problem that it should have been addressed somewhere. Maybe there’s a common assumption that I’ve just missed about how utility aggregates to an agent over its functioning lifetime, and what happens to that utility when it dies. Or maybe everyone is just using “utility” as their preference value for reachable or imaginable states of the universe at some specific point in time, rather than mixing stock and flow.
Making your assumptions about utility explicit will dissolve the paradoxes, mostly by forcing you to pin down the mechanisms you talk about in “the good”: once you can specify the limit function that’s approaching infinity, you can specify the (probabilistic) terminal utility function.
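For the procrastination-style button case specifically, here is a minimal sketch of what “specify the limit” buys you (the 1 − 1/t payoff schedule is purely an assumed toy example, not anything from the original post):

```python
# Toy procrastination setup, payoff schedule assumed for illustration:
# pressing the button on day t is worth 1 - 1/t, so deferring one more
# day always looks better, but the "never press" policy has to be
# assigned a value explicitly -- that's the limit / terminal-utility choice.

def press_on_day(t: int) -> float:
    return 1.0 - 1.0 / t      # strictly increasing in t; sup = 1, never attained

NEVER_PRESSED = 0.0           # the terminal utility you must choose explicitly

for t in (1, 10, 1000):
    print(f"press on day {t}: {press_on_day(t):.4f}")
print("never press:", NEVER_PRESSED)
```

Deferral is an improvement at every finite step, yet the limiting policy is the worst one; once you say what the limit is worth (here, 0), the “paradox” is just the familiar gap between a supremum and an attained maximum.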
Making clear that utility is an evaluation of the state of the universe at a point in time ALSO dissolves it—the agent doesn’t actually get utility from an un-pressed button, only potential utility from the opportunity to press it later.
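A minimal sketch of that framing, with all structure and numbers assumed for illustration: in a state evaluation, the un-pressed button shows up only as option value, weighted by the agent’s own probability of ever getting around to pressing it.

```python
# Sketch of "evaluate the state, not the history" (structure and numbers
# assumed): an un-pressed button contributes no realized utility, only an
# option value proportional to the chance it is eventually pressed.

def state_value(button_pressed: bool, p_press_later: float,
                press_utility: float = 1.0) -> float:
    if button_pressed:
        return press_utility                  # realized utility
    return p_press_later * press_utility      # mere option value of the opportunity

print(state_value(True,  p_press_later=0.0))   # 1.0 -- actually pressed
print(state_value(False, p_press_later=0.9))   # 0.9 -- probably will press
print(state_value(False, p_press_later=0.0))   # 0.0 -- perpetual procrastinator
```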
“is it better to suffer an hour of torture on your deathbed, or 60 years of unpleasant allergic reaction to common environmental particles?”
This only seems difficult to you because you haven’t assigned numbers to the pain of the torture or the unpleasant reaction. Once you do so (as any AI utility function must), it is just math. You are not really talking about procrastination at all here.