But if you can answer questions like “how much money would I pay to save a human life under the first hypothesis” and “under the second hypothesis”, which seem like questions you should be able to answer, then the conversion stops being a problem.
You are just normalizing on the dollar. You could ask “how many chickens would I kill to save a human life” instead, and you would normalize on a chicken.
I’m normalizing on my effort—eventually, on my pleasure and pain as a common currency. That’s not quite the same as normalizing on chickens, because the number of dead chickens in the world isn’t directly qualia.
The min-max normalisation of https://www.lesswrong.com/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison can be seen as a formalisation of normalising on effort: it normalises on what you could achieve if you dedicated yourself entirely to one goal.
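As a rough sketch of that scheme (notation assumed here rather than taken from the linked post): each theory's utility function $u_i$ is rescaled so that the worst outcome you could bring about maps to 0 and the best maps to 1,

$$\hat{u}_i(x) \;=\; \frac{u_i(x) - \min_y u_i(y)}{\max_y u_i(y) - \min_y u_i(y)},$$

so each theory's unit becomes the gap between ignoring it entirely and devoting all of your effort to it, which is what "normalising on effort" amounts to.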