I don’t know what you mean by “average across time”
I mean calculating the average utility of the whole timeline, not of particular discrete moments in time.
An example. Let’s say we’re in the year 2020 and considering whether it’s cool to murder 7 billion people in order to let a person-of-maximum-utility lead an optimal life from 2021 onwards. By utility in this case I mean “satisfaction of preferences” (preference utilitarianism) rather than “happiness”.
If we do so, a calculation that treats 2020 and 2021 as separate “worlds” might say “If 7 billion people are killed, 2021 will have a much higher average utility than 2020, so we should do it in order to transition to the world of 2021.”
But I’d calculate it differently: If 7 billion people are killed between 2020 and 2021, the people of 2020 have far less utility, because they very strongly prefer not to be killed, and killing them would therefore grossly violate their preferences. The average utility of the timeline as a whole would thus be vastly reduced by the murders.
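The difference between the two calculations can be made concrete with a toy sketch. All the numbers below are purely illustrative assumptions (the 0–1 satisfaction scale, the large negative value for a violated don’t-kill-me preference); the point is only the structure of the two averages, not the specific magnitudes:

```python
# Toy illustration with hypothetical numbers: per-year averaging vs.
# whole-timeline averaging under preference utilitarianism.
POP = 7_000_000_000

ORDINARY = 0.6    # assumed satisfaction level of an ordinary life per year
OPTIMAL = 1.0     # the person-of-maximum-utility
MURDERED = -10.0  # assumed value of a very strong violated preference

# --- Per-year ("separate worlds") calculation ---
# Compare only the 2021 snapshots: 1.0 (one optimal person) vs. 0.6
# (ordinary lives), so this view endorses the murders.
avg_2021_after_murders = OPTIMAL
avg_2021_without_murders = ORDINARY

# --- Whole-timeline calculation ---
# Sum utility over every person-year in 2020–2021, then divide.
def timeline_average(total_utility, person_years):
    return total_utility / person_years

# With the murders: 7 billion violated preferences in 2020,
# plus one optimal person-year in 2021.
with_murders = timeline_average(POP * MURDERED + 1 * OPTIMAL, POP + 1)

# Without the murders: everyone lives an ordinary life in both years.
without_murders = timeline_average(POP * ORDINARY * 2, POP * 2)

print(with_murders < without_murders)  # the timeline view forbids the murders
```

Under these assumed numbers the timeline average with the murders is dominated by the 7 billion violated preferences, so it comes out far below the no-murder timeline, even though the 2021-only snapshot comparison points the other way.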
Anyway, utilitarianism is a form of consequentialism in that it assigns moral value to world-states rather than to transitions between them.
One just needs to treat ‘world-states’ 4-dimensionally, as ‘timeline-states’...
If you could genetically modify future humans to make them indifferent to being killed, would you do that, since it would facilitate the mass murder?