In general, the solution is not to add a large existence term, but to make sure that the utility function as stated matches your actual preference for whether or not someone should exist.
What do you mean by “actual preference”? I can’t think of many interpretations of that comment that don’t implicitly define a utility function (making the statement tautological): even evaluating everything in terms of how well it matches our ethical intuitions constitutes a utility function, albeit one that’ll almost certainly end up being inconsistent outside a fairly narrow domain.
Our intuitions evolved to make decisions about existing entities in a world dense with externalities. I’d expect them to choke on problems that don’t deal with either one in a conventional way, and I don’t trust intuition pumps that rely on those problems.
I do mean a utility function, but one not necessarily known to the agent. If someone values all good lives but wants to consider average utilitarianism, they could make average utilitarianism less different from their real utility function by adding existence terms. However, if their real utility function says that a life should exist, ceteris paribus, whenever it meets a certain standard of value, regardless of the value of other lives, then they are not really an average utilitarian; average utilitarianism is incompatible with that statement.
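To make the contrast concrete, here is a toy sketch. The numbers, the flat existence bonus, and the function names are my own illustration, not anything from the discussion above; the point is only that the two functions can disagree about whether a worthwhile-but-below-average life should be added.

```python
def average_utility(lives):
    """Average utilitarianism: value of a world is mean welfare."""
    return sum(lives) / len(lives)

def total_with_existence_term(lives, existence_bonus=1.0):
    """One way to encode 'a life above some bar should exist, ceteris
    paribus': each life worth living (welfare > 0) contributes a flat
    bonus on top of its welfare; other lives contribute welfare alone."""
    return (sum(w + existence_bonus for w in lives if w > 0)
            + sum(w for w in lives if w <= 0))

population = [10.0, 8.0, 9.0]
new_life = 5.0  # worthwhile, but below the current average of 9.0

# Average utilitarianism disapproves of adding this life
# (the mean drops from 9.0 to 8.0)...
print(average_utility(population + [new_life])
      < average_utility(population))  # True

# ...while the existence-term function approves of it
# (its value rises from 30.0 to 36.0).
print(total_with_existence_term(population + [new_life])
      > total_with_existence_term(population))  # True
```

Whatever specific existence term you pick, the sign of the disagreement on cases like this is what marks the underlying function as not really average utilitarian.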
That seems logically valid, but it doesn’t tell us very much about those utility functions that we don’t already know.
Well, we must already know something about our utility functions, or it is meaningless to say that we want them maximized. I feel considerably more confident that I want people to live good lives than I feel about any of the arguments for average utilitarianism that I have seen.