This is how I imagine the general form of utilitarianism’s utility function: assign every state of being a quantity of fun, which can be positive or negative. Multiply each fun-value by the amount of time over which it is experienced and by the number of beings experiencing it, then sum over all of these.
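In symbols, a minimal sketch of what I have in mind (the notation is mine, not standard: f(s) is the fun-value of a state s, t(s) its duration, and n(s) the number of beings experiencing it):

$$U = \sum_{\text{states } s} f(s) \cdot t(s) \cdot n(s)$$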
This utility function could be optimized by bringing people into the universe whenever the total fun-impact that they would have on the universe (including on themselves) is higher than the total fun-impact of not bringing them into the universe.
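Put as an explicit inequality, assuming U is as sketched above: create a new person p exactly when

$$U(\text{world with } p) > U(\text{world without } p),$$

where both sides count p’s effects on everyone else as well as p’s own fun.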
However, this utility function only cares about people’s preferences if they either already exist or may be brought into existence. It could act on data which suggested that sentiences with certain preferences were more likely to exist than sentiences with other preferences, but I don’t know if we have any strong data in that area. (I would suppose not, but I don’t think I’ve thought about it long enough to make a positive claim.)
As to why I’m discussing utilitarianism: any utility function which assigns utility to the fulfillment of others’ preferences is a form of utilitarianism -- if you object to valuing all sentiences equally, then add a multiplier in front of each sentience indicating how much you value its fun relative to that of other sentiences. Either way, I think that the above conclusions still apply.
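A toy sketch of that weighted version, purely illustrative -- the weights, fun-values, and durations below are made-up placeholders, not anything derived from an actual theory of value:

```python
# Toy sketch of a weighted "fun-integral" utility function.
# Each entry: (weight placed on this sentience, fun-value of its state,
# duration of that state). All numbers are illustrative placeholders.
experiences = [
    (1.0,  5.0, 2.0),   # a being weighted fully, in a positive state
    (1.0, -3.0, 1.0),   # the same weight, a negative (suffering) state
    (0.5,  4.0, 3.0),   # a sentience weighted at half
]

def total_utility(experiences):
    """Sum of weight * fun-value * duration over all experienced states."""
    return sum(w * fun * duration for w, fun, duration in experiences)

print(total_utility(experiences))  # 13.0 with the placeholder numbers
```

Setting every weight to 1.0 recovers the unweighted version described at the top.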