Well, I admit to being one of the approximately seven billion humans who can’t prove their utility functions from first principles.
But you seem to think (and correct me if I’m misinterpreting) that it would be better if we could. I’m not so sure. Further, you seem to think that, given that we can’t, it’s still better to override our felt, intrinsic preferences (which are hard to fully justify) with unnatural preferences whose sole advantage is that they’re easier to express in simple sentences.
Now, I’m not sure you’re actually claiming this, but with the pig/dog comparison you seem to be acknowledging that many people value dogs more than pigs (I’m not clear whether you have this instinctive preference yourself), while arguing that, based on some abstract concept of levels of consciousness (itself subjective given our current knowledge), we should override our instincts and judge them as being of equal value. I’m saying “screw the abstract theory, I value dogs over pigs, and that’s sufficient moral justification for me.” I can give you rationalizations for my preference (the idea that dogs have been bred to live with humans, for example), but ultimately I don’t think the rationalization is required for moral justification.
But I think there’s a very convincing argument that consciousness is in fact what we’re actually looking for and naturally taking into account.
If this is true, then we should prefer our natural judgements (we value cute baby seals highly, and that’s fine, because what we’re really valuing is their consciousness, not the fact that they share facial features with human babies and so trigger protective instincts). You can’t have it both ways: either we prefer dogs to pigs because they really are ‘more conscious’, or we should fight our instincts and value them equally because our instincts mislead us. I’d agree that what you call ‘consciousness’ or ‘awareness’ is a factor, but I don’t think it’s the most important feature influencing our judgements. And I don’t see why it should be.
To some degree I bite the bullet: if there were some entity whose nervous system was so acute that causing it the slightest amount of pain would correspond to 3^^^3 years of torture for a human being, I’d place a high priority on keeping that entity happy.
And it’s exactly this sort of thing that makes me inclined to reject utilitarian ethics. If following utilitarian ethics leads to morally objectionable outcomes, I see no good reason to think the utilitarian position is right.