(I haven’t read or thought deeply about the details of utilitarianism, so this might be a 101-level question.)
Does it work to have a variant where whether one action is better than another depends on “the average utility of your future light cone conditional on each action”?
Then it would likely be bad to have a kid who’d have lower utility than the average human alive today, because (I would guess) that’s likely to lower the average utility of your future light cone.
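To make that slightly more concrete, here’s a rough sketch of the rule I have in mind (the notation is just mine, nothing standard):

$$a^* = \arg\max_a \; \mathbb{E}\!\left[\frac{1}{N}\sum_{i=1}^{N} u_i \;\middle|\; a\right]$$

where the sum is over the $N$ morally relevant lives in my future light cone, $u_i$ is the utility of the $i$-th one, and both $N$ and the $u_i$ depend on which action I take.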
Sure it would work. But why? Why on earth would you say that? It’s just a completely random definition of a utility function.
The universe has no concept of ethics. Ethics are purely in the mind.
The purpose of utilitarianism is to try to formalise some of our intuitions about ethics so that we can act consistently. If the formalised utility function doesn’t match our intuitions, why bother?
Fwiw it doesn’t feel random to me. It feels like what I get if I think (briefly, shallowly) about, like:

- What are the intuitions that seem to lead someone to advocate average utilitarianism?
- Okay, and average utilitarianism as naively described clearly doesn’t match those, for the reasons you describe.
- This adjustment feels like maybe it gets closer to matching those intuitions?
(But also I think a formalized utility function is never going to match all our intuitions and that’s not necessarily a problem.)