I am in total agreement with whatever point you just made, which seems to be that normalization schemes are madness.
What you “did” there is full of type errors, like treating the scales and offsets as significant and whatnot. That is not allowed, and you yourself seemed to be claiming that it is not allowed.
I guess the point of the OP must have been unclear, though, because I was also assuming that such things were not allowed.
What I did in the OP was completely decouple things from the arbitrary scale and offset that the utility functions come with. I said we have a utility function U’ that agrees with moral theory m on object-level preferences conditional on moral theory m being correct. This gives us an unknown scale and offset for each utility function, which masks out the arbitrariness of each utility function’s native scale and offset. That scale and offset are then adjusted so that the relative utilities we end up with are consistent with whatever preferences we want to have.
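One way to write that down as a sketch (with U_m for moral theory m’s native utility function, and a_m, b_m as my own labels for the unknown scale and offset; none of these symbols appear in the OP): conditional on m being correct,

$$U'(x) \;=\; a_m\,U_m(x) + b_m, \qquad a_m > 0,$$

since agreeing with m on object-level preferences only pins U’ down up to a positive affine transformation of U_m. Whatever arbitrary scale and offset U_m happened to come with get absorbed into a_m and b_m, and those are the quantities that get adjusted afterwards.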
I hope that clarifies things? But it probably doesn’t.
What you “did” there is full of type errors, like treating the scales and offsets as significant and whatnot. That is not allowed, and you yourself seemed to be claiming that it is not allowed.
Hm. You definitely did communicate that, but I think I’m pointing out a math mistake: it seems to me that you called the problem of arbitrary offsets solved too early. Though in your example it wasn’t a problem because you only had two outcomes and one outcome was always the zero point.
As I realized later because of Alex, the upshot is that to really deal with the problem of offsets you have to (at least de facto) normalize the relative utilities, not the utilities themselves. (On pain of stupidity)
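To sketch what I mean (my notation, not anything from the thread: U_m for theory m’s utility function, a_m and b_m for its unknown scale and offset, P(m) for the credence in m, and assuming the overall evaluation is just the credence-weighted expectation of U’), comparing two options x and y gives

$$\mathbb{E}[U'(x)] - \mathbb{E}[U'(y)] \;=\; \sum_m P(m)\,a_m\,\bigl(U_m(x) - U_m(y)\bigr),$$

because the offsets b_m cancel. Only the differences, i.e. the relative utilities, ever affect a decision, so whatever a normalization scheme does to the U_m values themselves, the only part that can matter is what it implies about those differences.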
Though in your example it wasn’t a problem because you only had two outcomes and one outcome was always the zero point.
I think my procedure does not run into trouble even with three options and other offsets. I don’t feel like trying it just now, but if you want to demonstrate how it goes wrong, please do.
the upshot is that to really deal with the problem of offsets you have to (at least de facto) normalize the relative utilities, not the utilities themselves. (On pain of stupidity)
I don’t understand what you are saying here.