On your lexicographic utility function, I think it’s pretty ad hoc that it depends on explicit upper bounds on the quantities, which will depend on the specifics of our universe, but you can manage without them and allow unbounded quantities (and countably infinitely many, but I would be careful going further), unfortunately at the cost of additivity. I wrote about this here.
Thanks. I agree that the lexicographic utility we defined is ad hoc, and that it is not unique. Of course there are infinitely many utility functions that represent the same preferences, since they are defined only up to affine transformation; and once we are representing “infinite” preferences in a finite universe, the class is slightly larger still, since we can place as (finitely) large a gap between lexically different goods as we want and still represent the same preferences. I’m unsure whether your formulation makes any difference in terms of the ability to represent preferences.
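A minimal sketch of the bounded approach under discussion (the names and the bound `B_MAX` are hypothetical, assumed for illustration): with a known universe-specific upper bound on the lower-priority good B, lexical priority of A over B can be packed into a single real-valued, additive utility.

```python
# Assumed universe-specific bound on the attainable amount of B.
B_MAX = 1_000

def bounded_lex_utility(a: int, b: int) -> int:
    """Single-scalar utility in which one extra unit of A outweighs
    any attainable amount of B, provided b never exceeds B_MAX."""
    assert 0 <= b <= B_MAX, "the encoding breaks if the bound is violated"
    return a * (B_MAX + 1) + b

# One unit of A beats the maximum possible B:
assert bounded_lex_utility(1, 0) > bounded_lex_utility(0, B_MAX)
```

The assertion inside the function is the point of contention above: if the bound turns out to be too low, the encoding no longer represents the intended lexical preferences and the utility function itself must change.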
My formulation can handle lexicality according to which any amount of A (or anything greater than a certain increment in A) outweighs any (countable) amount of B, not just finite amounts up to some bound. The approach you take is more tied to empirical facts about the universe: if you want it to give a bounded utility function, you need a different utility function for each possible universe. If you learn that your bounds were too low (e.g. that you can in fact affect much more than you previously thought), then in order to preserve lexicality you’d need to change your utility function, which is something we’d normally not want to do.
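For contrast, a minimal sketch (hypothetical names, not the author’s actual formulation) of a bound-free alternative: represent outcomes as (A, B) pairs and compare them lexicographically. This preserves lexicality for arbitrarily large amounts of B, but the preference order is no longer given by a single real-valued, additive utility.

```python
# Outcomes as (a, b) pairs, with A taking lexical priority over B.
def lex_better(x: tuple[int, int], y: tuple[int, int]) -> bool:
    """True iff outcome x is strictly preferred to y: compare on A first,
    and break ties on B. Python tuples compare lexicographically."""
    return x > y

# Any positive amount of A beats any finite amount of B, with no bound needed:
assert lex_better((1, 0), (0, 10**100))
# Ties on A are broken by B:
assert lex_better((1, 5), (1, 4))
```

No bound on B ever enters the comparison, which is what avoids revising the representation when the universe turns out larger than expected.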
Of course, my approach doesn’t solve infinite ethics in general; if you’re adding commensurable goods and bads, you can still get divergent series and the like. And, as I mentioned, you sacrifice additivity, which is a big loss.
Cool!