Some may be tempted to treat “species” as if it were a fundamental concept, a Platonic form.
The biggest improvement I would like to see in this post is engagement with opposing arguments more realistic than “humans are a Platonic form.” Currently you just knock down a very weak argument or two and then rush to a conclusion.
EDIT: whoops, I missed the point, which is only to argue against speciesism. My bad. Edited out a misplaced “argument from future potential,” which is what Jabberslythe replied to.
However, you really do only knock down weak arguments. What if we simply define categories more robustly than “Platonic forms,” as philosophers have done just fine since at least Wittgenstein, and as is covered on this very blog? Then there’s no point in talking about Platonic forms.
As for the argument that “one will be human and the next will not be,” how do you deal with the unreliability of the sorites paradox as a philosophical test? Or what if we use a more general continuous model of speciesism, thus eliminating sharp lines? You don’t just have to avoid deliberately strawmanning, you have to actively steelman :)
The section you quote from is quite obvious, and I could probably have cut it down to a minimum given that this is LW. You make a good point: one could, for instance, have a utility function that includes a gradual continuum of declining moral weight based on evolutionary relatedness or relevant capabilities and so on. This would be consistent and not speciesist. But there would be infinitely many ways of defining how steeply moral relevance declines, or whether the decline is linear or not. I guess I could argue, “if you’re going for that amount of arbitrariness anyway, why even bother?” The function would not just depend on outward criteria like the capacity for suffering, but also on personal reasons for our judgments, which is very similar to what I have summarized under H.
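To make that arbitrariness concrete, here is a minimal sketch (Python; the function name, distance metric, and parameter values are all hypothetical, made up for illustration rather than taken from the post). The point is that the steepness and the functional form are free parameters the continuum itself does nothing to pin down:

```python
import math

def moral_weight(distance: float, steepness: float = 1.0,
                 shape: str = "exponential") -> float:
    """Hypothetical graded moral weight: 1.0 at distance 0 (e.g. a
    typical human), declining as `distance` grows, where distance
    might stand for evolutionary divergence or a capability gap.
    Both `steepness` and `shape` are arbitrary modeling choices."""
    if shape == "exponential":
        return math.exp(-steepness * distance)
    if shape == "linear":
        return max(0.0, 1.0 - steepness * distance)
    raise ValueError(f"unknown shape: {shape}")

# The same being at distance 2.0 gets wildly different weights
# depending on parameters nothing in the model constrains:
for steepness in (0.1, 1.0, 10.0):
    print(steepness, moral_weight(2.0, steepness))
# ~0.82, ~0.14, ~2e-9 respectively.
```

Each parameterization is internally consistent and non-speciesist, which is exactly why consistency alone doesn’t settle which one to use.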
Yes, value is complex. So what? The utility function is not up for grabs.
I think the relevant point is the part about racism, sexism etc. If we allow moral value to depend on things other than the beings’ relevant attributes, then sure, we can be speciesist. But we can also be racist, sexist, …
Those two babies differ in that they have different futures, so it would not be wrong to treat them differently such that suffering is minimized (and you should). Nor would it be speciesist to do so, because there is that difference.