Hey, a thought occurred. I was random-browsing The Intuitions Behind Utilitarianism and saw the following:

“You can say anything, but Graham’s number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham’s number, enough air pressure to kill you would have negligible disutility.”
It occurs to me that this sounds a lot like the problem with the linear scaling used by “utilitarianism”: “paradox” of the heap or not, things can have very different effects depending on whether they come in very small or very large numbers. You really should not have a utility function that rates the “disutility of an air molecule slamming into your eye” and then scales up linearly with the number of molecules, precisely because one molecule has no measurable effect on you, while an immense number (e.g., a tornado) can and will kill you.
When you assume linear scaling of utility as an axiom (that “utilons” are an in-model real scalar), you are throwing out the causal interactions involving the chosen variable (the real-world thing embodying a “utilon”) that scale in nonlinear ways. The axiom is telling you to ignore part of how reality works just to get a simpler “normative” model.
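To make the contrast concrete, here is a toy sketch of my own (the numbers and the logistic harm curve are made-up assumptions, not anything from the original post): a linear model built from a per-molecule disutility of 1 over an astronomically large number never registers even a lethal blast of air as more than negligible, while a nonlinear model that tracks the actual causal threshold does.

```python
# Toy illustration only: the constants and curve shape are assumptions for
# the sake of the example, not claims about the original post.
import math

HUGE = 1e100              # stand-in for "a very large number" (Graham's number won't fit in a float)
PER_MOLECULE = 1 / HUGE   # the posited per-molecule disutility

def linear_disutility(n: float) -> float:
    """The linear-scaling axiom: total harm = per-molecule harm * count."""
    return PER_MOLECULE * n

def nonlinear_disutility(n: float, lethal_n: float = 1e27) -> float:
    """Hypothetical nonlinear model: harm stays near zero until the molecule
    count approaches a lethal scale, then saturates at 1 (death).
    A logistic curve in log-space, chosen purely for illustration."""
    return 1.0 / (1.0 + math.exp(-5.0 * (math.log10(n + 1) - math.log10(lethal_n))))

for n in (1.0, 1e6, 1e27, 1e30):
    print(f"molecules={n:.0e}  linear={linear_disutility(n):.1e}  nonlinear={nonlinear_disutility(n):.2f}")
```

Under the linear model the lethal case still comes out as a rounding error; the nonlinear model is the one that actually notices “this kills you.”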
So in the typical, intuitive case, we assume that “maximizing happiness” means some actually-existing agent experiences the additional happiness. But when you instead have a Utilitarian AI that adds happiness by adding not-quite-clinically-depressed people, the map of “utility maximizing” as “making individual experiences more enjoyable” has ceased to match the territory of “increase the number of individuals until the carrying capacity of the environment is reached”. A nonlinear scaling effect happened (you created so many people that they can’t individually be very happy), but the “normativity” of the linear-utilons axiom told your agent to ignore it.
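Here is another toy sketch of that failure mode (the happiness-vs-crowding curve below is my own assumption, again not from the post): as long as per-person happiness falls off more slowly than 1/population, the summed total keeps rising, so a maximizer of linear utilons keeps adding barely-happy people.

```python
# Toy sketch: all functions and numbers here are illustrative assumptions.

def per_person_happiness(population: int) -> float:
    """Assumed crowding curve: individual happiness decays toward zero as the
    population grows, but slower than 1/population."""
    return 1.0 / (1.0 + population / 1_000)

def total_utility(population: int) -> float:
    """The linear-utilons axiom: total value is just the sum over individuals."""
    return population * per_person_happiness(population)

for pop in (1_000, 100_000, 10_000_000):
    print(f"population={pop:>10,}  per-person={per_person_happiness(pop):.4f}  total={total_utility(pop):,.0f}")
```

Per-person happiness collapses toward zero while the total keeps creeping upward, so an agent that only sees the total never notices the nonlinear effect it just produced.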
I think a strong criterion for a True Ethical System should be precisely that it doesn’t “force” you to ignore the causal joints of reality.