My formulation can handle lexicality according to which any amount of A (or anything greater than a certain increment in A) outweighs any countable amount of B, not just finite amounts up to some bound. The approach you take is more specific to empirical facts about the universe; if you want it to give a bounded utility function, you need a different utility function for different possible universes. If you learn that your bounds were too low (e.g. that you can in fact affect much more than you thought), then in order to preserve lexicality you'd need to change your utility function, which is something we'd normally not want to do.
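To make the contrast concrete (this is my own toy construction, not something from your comment): suppose the universe guarantees that B can contribute at most $N$ units of value. Then a utility function like

$$u(a, b) = (N+1)\,a + b, \qquad 0 \le b \le N,$$

makes any one-unit gain in A (worth $N+1$) outweigh every feasible change in B (worth at most $N$), and you can rescale $u$ into a bounded range, e.g. with $\arctan$, while preserving the ranking of sure outcomes. But the construction only respects the lexical priority because of the empirical bound $N$: if you learn that B can in fact contribute more than $N+1$ units, this $u$ no longer treats A as lexically prior, and you have to replace it.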
Of course, my approach doesn't solve infinite ethics in general: if you're adding goods and bads that are commensurable, you can still get divergent series and so on. And, as I mentioned, you sacrifice additivity, which is a big loss.
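For instance (a standard illustration, not specific to either approach): a world containing infinitely many commensurable units of good and bad, with values $+1, -1, +1, -1, \ldots$, has total value

$$\sum_{i=1}^{\infty} (-1)^{i+1} = 1 - 1 + 1 - 1 + \cdots,$$

which has no well-defined sum; the partial sums oscillate, and how you group or order the locations determines what you get (e.g. $(1-1)+(1-1)+\cdots = 0$ versus $1+(-1+1)+(-1+1)+\cdots = 1$).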