Why stop at two? What’s that Isaac Asimov quote about one being possible, and infinity being possible, but two being ridiculous?
If you just have one dimension, then you’re doing utility comparisons, not necessarily interpersonal comparisons, but comparisons just so that you can rank actions and decide which one to take! If you’re going to have more than one, you can have as many as you like, because then they’re just dimensions of variation among things humans might prefer.
I agree! I think you should use as many dimensions as necessary to define a subjective state, no more, no less. You don’t want to leave any important ethical intuitions on the cutting room floor. You don’t want to compare and “rank” apples and oranges unless it’s appropriate.
If you’re comparing “dying of malaria” versus “not dying of malaria”, I have no problem with ranking those outcomes in a simple preference ordering, to which it would be appropriate to apply QALYs or something.
Crushing the full symphonic nuance of human experience onto a number line, without capturing every micro-wrinkle of that landscape, is the pathway to the Bad Ending for humanity. A true success at CEV means that the FAI has understood every possible dimension of human subjectivity and can reliably answer questions of preference between “headache+friend” and “no headache+no friend”, and even more ambiguous questions.
I frankly didn’t put a ton of thought into what my two axes were. I was merely trying to capture the intuition that whatever Suffering is and whatever “Positively Valenced Emotion” is, they aren’t merely opposites of each other.
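To make the “rank only when appropriate” intuition concrete, here is a minimal sketch in Python. Everything in it is my own illustrative assumption, not anything from the thread: the two axes (suffering and positive valence, echoing the point that they aren’t mere opposites), the Pareto-dominance rule, and all the numbers are hypothetical. A partial order like this happily ranks “malaria” against “no malaria” but declines to rank “headache+friend” against “no headache+no friend”.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class SubjectiveState:
    """A toy two-axis state: suffering and positive valence are
    tracked separately rather than collapsed into one signed number."""
    suffering: float          # 0 = none, higher = worse
    positive_valence: float   # 0 = none, higher = better


def dominates(a: SubjectiveState, b: SubjectiveState) -> bool:
    """Pareto dominance: a is at least as good as b on every axis
    and strictly better on at least one."""
    at_least_as_good = (a.suffering <= b.suffering
                        and a.positive_valence >= b.positive_valence)
    strictly_better = (a.suffering < b.suffering
                       or a.positive_valence > b.positive_valence)
    return at_least_as_good and strictly_better


def prefer(a: SubjectiveState, b: SubjectiveState) -> Optional[SubjectiveState]:
    """Return the preferred state when the comparison is clear-cut,
    or None when the axes trade off (apples vs. oranges)."""
    if dominates(a, b):
        return a
    if dominates(b, a):
        return b
    return None  # incomparable: refuse to rank


# "Dying of malaria" vs. "not dying of malaria": one state dominates,
# so a simple preference ordering (QALY-style) is fine here.
malaria = SubjectiveState(suffering=9.0, positive_valence=0.5)
no_malaria = SubjectiveState(suffering=1.0, positive_valence=0.5)
assert prefer(malaria, no_malaria) == no_malaria

# "Headache + friend" vs. "no headache + no friend": the axes trade
# off against each other, so the partial order declines to rank them.
headache_friend = SubjectiveState(suffering=3.0, positive_valence=7.0)
neither = SubjectiveState(suffering=0.0, positive_valence=0.0)
assert prefer(headache_friend, neither) is None
```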