I’m not arguing for any particular view, but the causal reach of possible policies/actions should determine the reach of the averaging function, right? It makes as little sense to include other possible universes in our calculation of optimal policies for this universe, as it does to try to coordinate policies between two space-time points (within the same universe) that can’t causally interact with one another.
If we’re trying to find and implement good policies (according to whatever definition of “good”), then in deciding on goodness of a policy, we should only care about the things that we can actually affect, and in proportion to the degree to which we can affect them.
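As a toy illustration of "in proportion to the degree to which we can affect them," one could imagine an influence-weighted average of utilities, where causally unreachable subjects carry zero weight and so drop out entirely. Everything here (names, weights, the weighting scheme itself) is a hypothetical sketch, not a claim about any established framework:

```python
def policy_goodness(utilities, influence):
    """Score a policy only over subjects the agent can causally affect.

    utilities: dict mapping subject -> utility under the policy
    influence: dict mapping subject -> degree of causal influence in [0, 1]
               (0 = causally unreachable, hence excluded from the average)
    """
    reachable = {s: w for s, w in influence.items() if w > 0}
    if not reachable:
        return 0.0  # no causal reach at all: the policy cannot be ranked
    total_weight = sum(reachable.values())
    weighted = sum(utilities[s] * w for s, w in reachable.items())
    return weighted / total_weight  # influence-weighted average utility


# A subject in a causally disconnected universe (influence 0.0)
# contributes nothing, however high its utility:
utilities = {"alice": 0.9, "bob": 0.4, "other_universe": 1.0}
influence = {"alice": 1.0, "bob": 0.5, "other_universe": 0.0}
score = policy_goodness(utilities, influence)
```

Note that this is exactly where the objection in the next comment bites: the `influence` weights are indexed to a particular agent, so two agents with different causal reach will compute different scores for the same policy.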
That wipes out the only good thing about any form of utilitarianism: that whatever aggregated measure of utility you have, it is objective. Anyone with complete information about the universe could in principle collect the individual utilities of all inhabitants, aggregate them according to the given rules, and get the same result.
It doesn’t make sense to use an impossibility as part of the criteria for judging the goodness of something.
An action/utility function can only ever be a function over (at most) all information available to the agent.