It seems like a different definition of utility ("the sum of happiness minus suffering for all conscious beings") than usual was introduced somewhere. The concept of utility doesn't really restrict what an agent values; it includes such things as paperclip maximizers, for instance.
Also, agents can maximize not expected utility but the minimum utility over all possible outcomes, choosing a guaranteed nice world over a hell/heaven lottery.
You're using the word "utility" in a different sense than I am here. There are at least three definitions of that word. I'm using the one from hedonic utilitarianism (since that's what most EAs identify as), not the one from decision theory (e.g., "expected utility maximization" as a decision theory), and not the one from economics (rational agents maximizing "utility").
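To make the decision-theoretic distinction concrete, here is a minimal sketch (not from either commenter, with made-up utility numbers) contrasting an expected-utility maximizer with the maximin rule mentioned in the first comment: the former may prefer a hell/heaven lottery when the upside is large enough, while the latter always takes the guaranteed nice world.

```python
# Illustrative sketch only: utility numbers are invented for the example.
lotteries = {
    # 50% chance of a very bad world, 50% chance of a very good one
    "hell_or_heaven": [(0.5, -100.0), (0.5, +200.0)],
    # a guaranteed modestly nice world
    "guaranteed_nice": [(1.0, +10.0)],
}

def expected_utility(lottery):
    """Probability-weighted average utility."""
    return sum(p * u for p, u in lottery)

def worst_case_utility(lottery):
    """Utility of the worst possible outcome (maximin criterion)."""
    return min(u for _, u in lottery)

eu_choice = max(lotteries, key=lambda name: expected_utility(lotteries[name]))
maximin_choice = max(lotteries, key=lambda name: worst_case_utility(lotteries[name]))

print(eu_choice)       # "hell_or_heaven": EU of 50 beats a guaranteed 10
print(maximin_choice)  # "guaranteed_nice": worst case +10 beats worst case -100
```

Neither rule says anything about *what* the utilities attach to; that is the separate point about hedonic versus decision-theoretic versus economic senses of "utility" in the reply above.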