I don’t think your argument passes the ideological Turing test. I would have preferred that you at least directly address the challenges in this post.
The post you link to makes five points.
1) and 2) don’t concern the arguments I’m making, because I deliberately left empirical issues out.
3) is also an empirical issue, and one that applies to some humans as well.
4) is the most interesting one: “Something About Sapience Is What Makes Suffering Bad.”
I sort of addressed this here. I must say I’m not very familiar with this position, so I might be bad at steelmanning it, but so far I simply don’t see why intelligence has anything to do with the badness of suffering.
As for 5), this is certainly a valid thing to point out when people are estimating whether a given being is sentient. Regarding the normative part of this argument: if there were cute robots that I felt empathy for but was sure weren’t sentient, I genuinely wouldn’t argue for giving them moral consideration.