If you already have a way to compare utilities of different moral agents, you should consult that method regardless of whether differences arise. You could of course identify a person's moral identity by how it impacts the global utility function. However, a change in utility relative to that person's own values need not map one-to-one onto the global function. Not handling the conversion would be like assuming that $ and £ contribute to wealth equally at their face values. Yet if I have the choice of making Clippy the paperclip maximiser or Roger the rubberband maximiser, there is probably some amount of paperclips in utility to Clippy that corresponds to some amount of rubberbands in utility to Roger. But I have a hard time imagining how I would come to know that amount.
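To make the currency analogy concrete, here is a toy sketch in Python. The agent utilities and the per-agent "exchange rates" into a common global unit are invented for illustration; the whole point of the argument is that nothing tells us how to discover those rates.

```python
# Each agent reports utility in its own private units.
agent_utilities = {
    "Clippy": 1000.0,  # utility measured in paperclip-units
    "Roger": 1000.0,   # utility measured in rubberband-units
}

# Hypothetical exchange rates into a common "global utility" unit.
# These numbers are assumptions; the argument gives no way to obtain them.
exchange_rates = {
    "Clippy": 0.8,  # one paperclip-unit = 0.8 global units (assumed)
    "Roger": 1.3,   # one rubberband-unit = 1.3 global units (assumed)
}

def naive_total(utilities):
    """Adds raw numbers, as if $ and £ counted equally toward wealth."""
    return sum(utilities.values())

def converted_total(utilities, rates):
    """Applies each agent's exchange rate before aggregating."""
    return sum(rates[agent] * u for agent, u in utilities.items())

print(naive_total(agent_utilities))                      # 2000.0 -- ignores units
print(converted_total(agent_utilities, exchange_rates))  # 2100.0 -- unit-aware
```

The two totals differ only because of the assumed rates, which is exactly the gap in the argument: the aggregation is trivial once the rates exist, and impossible to justify before they do.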