You don’t need a bounded utility function to avoid this problem. It merely has to have the property that the utility of a given configuration of the world doesn’t grow faster than the length of a minimal description of that configuration. (Where “minimal” is relative to whatever sort of bounded rationality you’re using.)
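To make that concrete, here is a quick sketch under one assumption not stated above: that the agent’s prior weights a world-configuration $w$ at roughly $2^{-K(w)}$, where $K(w)$ is the length of its minimal description (Solomonoff-style, with the same bounded-rationality caveat applied to $K$). If $|U(w)| \le c \cdot K(w)$ for some constant $c$, then the expected-utility contribution of any single hypothesis stays bounded:

$$2^{-K(w)} \cdot |U(w)| \;\le\; c \cdot K(w) \cdot 2^{-K(w)} \;\le\; c.$$

So a mugger can’t pump the expected value just by naming bigger numbers: writing 3^^^^3 instead of 3^^^3 adds only a few symbols to the description, and hence only a little utility under this constraint, while the probability penalty barely changes.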
It actually seems quite plausible to me that our intuitive utility-assignments satisfy something like this constraint (e.g., killing 3^^^^^3 puppies doesn’t feel much worse than killing 3^^^^3 puppies), though that might not matter much if you think (as I do, and I expect Eliezer does) that our intuitive utility-assignments often need a lot of adjustment before they become things a really rational being could sign up to.