The section you quote from is fairly obvious, and I could probably have cut it down to a minimum given that this is LW. You make a good point: one could, for instance, have a utility function that weights moral relevance along a gradual continuum of evolutionary relatedness, or of relevant capabilities, and so on. That would be consistent and not speciesist. But there would be infinitely many ways of defining how steeply moral relevance declines, and whether the decline is linear or not. I suppose I could argue, “if you’re going to accept that amount of arbitrariness anyway, why even bother?” The function would depend not just on outward criteria like the capacity for suffering, but also on personal reasons for our judgments, which is very similar to what I have summarized under H.
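To make the arbitrariness concrete, here is a minimal sketch of the kind of weighting function such a continuum view implies. The symbols are purely illustrative assumptions on my part: $d_i$ stands for some measure of evolutionary distance (or capability gap) of individual $i$ from us, $u_i$ for that individual's welfare, and $\lambda$ for how steeply moral weight declines.

\[
U = \sum_i w(d_i)\, u_i, \qquad \text{with, e.g.,} \quad w(d) = e^{-\lambda d} \quad \text{or} \quad w(d) = \max(0,\, 1 - \lambda d), \quad \lambda > 0.
\]

Both functional forms are internally consistent and non-speciesist, but the choice between them, and of the value of $\lambda$, is exactly the kind of free parameter I mean: nothing in the outward criteria settles it.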
Yes, value is complex. So what? The utility function is not up for grabs.