This is the sort of thing where I'd be insanely curious about the reason for the downvotes. If this approach to combining utility functions is so flawed as to not be worth considering, that is highly useful information.
Utility functions are only well-defined up to positive affine transformation (rescaling and shifting), so taking the average of two utility functions isn't mathematically meaningful.
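A minimal sketch of the problem (the agents and payoff numbers here are invented purely for illustration): rescaling one agent's utility function leaves that agent's preferences completely unchanged, yet it flips which option the "average" picks.

```python
# Two options, two agents. Agent 1 prefers A; agent 2 prefers B.
# All numbers are arbitrary units, invented for this sketch.
options = ["A", "B"]
u1 = {"A": 1.0, "B": 0.0}
u2 = {"A": 0.0, "B": 0.6}

def best_by_average(u1, u2):
    # Pick the option maximizing the unweighted average of the two utilities.
    return max(options, key=lambda o: (u1[o] + u2[o]) / 2)

print(best_by_average(u1, u2))            # 'A': average 0.5 beats 0.3

# Multiplying u2 by 10 represents the exact same preferences for agent 2...
u2_rescaled = {o: 10.0 * v for o, v in u2.items()}
print(best_by_average(u1, u2_rescaled))   # ...yet now 'B': 3.0 beats 0.5
```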
Consider a scarce-resource allocation problem in a small community with an appointed decision-maker: the utility function that intuitively should be used combines the utility functions of the different community members, taken with coefficients that make them commensurable.
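In symbols (notation mine, not anything from the thread), the decision-maker's objective would look like

$$U(x) = \sum_{i=1}^{n} w_i \, u_i(x), \qquad w_i > 0,$$

where $u_i$ is member $i$'s utility function and the weights $w_i$ are the commensurability coefficients; choosing those weights is exactly the additional structure discussed below.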
So you would need some additional structure to meaningfully combine utility functions, but, depending on the scenario, there are often solutions that seem natural in their domain. Of course, if we extend them beyond their natural domain we get all the weirdness explored in scenarios of Dr. Evil running a trillion simulations of himself to foil a CEV-performing Friendly AI, and, arguably, the SIA vs. SSA paradoxes too.
I was being semantically imprecise when I said "average." I should have said searching for the conditions that produce the highest additive utility. This seems different from Pareto improvements when we're talking about two agents agreeing to use their optimizing power together on some external conditions, rather than simply dividing some finite resource.
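To make the difference concrete, here is a minimal sketch (the joint outcomes and payoffs are hypothetical, invented for illustration). A max-sum outcome is always Pareto-efficient, but which point on the Pareto frontier it is depends on the arbitrary scaling of each agent's utility function:

```python
# Joint outcomes with (agent-1 utility, agent-2 utility) payoffs.
# Numbers are invented for this sketch.
outcomes = {
    "x": (3.0, 0.5),
    "y": (2.0, 2.0),
    "z": (0.5, 3.0),
    "w": (1.0, 1.0),  # dominated by y, hence not Pareto-efficient
}

def pareto_efficient(outs):
    """Outcomes that no other outcome weakly dominates."""
    def dominated(name):
        a1, a2 = outs[name]
        return any(b1 >= a1 and b2 >= a2 and (b1, b2) != (a1, a2)
                   for other, (b1, b2) in outs.items() if other != name)
    return [name for name in outs if not dominated(name)]

def max_additive(outs, scale2=1.0):
    """Outcome with the highest sum of utilities, after rescaling agent 2."""
    return max(outs, key=lambda n: outs[n][0] + scale2 * outs[n][1])

print(pareto_efficient(outcomes))        # ['x', 'y', 'z']: three efficient points
print(max_additive(outcomes))            # 'y' (sum 4.0)
print(max_additive(outcomes, scale2=3))  # 'z': rescaling agent 2 moved the pick
```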
From a mathematical (or any practical) point of view, this distinction is completely irrelevant.
Can you point me to anything relevant if you don't want to write a longer response? I feel I must be missing something basic.