The naive form of the argument is the same in the classic and moral-uncertainty two-envelopes problems. But yes: while the classic version has a resolution based on taking expected values of absolute rather than relative measurements, there’s no similar resolution for the moral-uncertainty version, where there are no unique absolute measurements.
There’s nothing wrong with using relative measurements, and using absolute measurements doesn’t resolve the problem. (It hides from the problem, but that’s not the same thing.)
The actual resolution is explained in the wiki article better than I could.
I agree that the naive version of the elephants problem is isomorphic to the envelopes problem. But the envelopes problem doesn’t reveal an actual difficulty with choosing between two envelopes, and the naive elephants problem as described doesn’t reveal an actual difficulty with choosing between humans and elephants. They just reveal a particular math error that humans are bad at noticing.
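To make the "math error" concrete, here's a minimal simulation sketch of the classic envelopes setup (the amounts, distribution, and trial count are all illustrative assumptions, not part of the original problem statement). The naive argument says the other envelope is worth 1.25× yours, so switching should help; averaging over actual absolute amounts shows it doesn't:

```python
import random

def simulate(trials=100_000, seed=0):
    """Average payoff of always keeping vs. always switching envelopes."""
    rng = random.Random(seed)
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1, 100)        # smaller amount (assumed distribution)
        envelopes = [x, 2 * x]         # one envelope holds twice the other
        pick = rng.choice([0, 1])      # you pick one at random
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]
    return keep_total / trials, switch_total / trials

keep, switch = simulate()
# Both averages converge to the same value: switching gains nothing,
# despite the naive "the other envelope is worth 1.25x mine" argument.
```

The error the naive argument makes is treating "my envelope's amount" as a fixed quantity while conditioning on which envelope is larger; averaging over the actual absolute amounts, as above, dissolves it for the classic version, which is exactly the move unavailable under moral uncertainty.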