Yeah but… that’s false. Which doesn’t make the rule bad — heuristics are allowed to apply only in certain domains — but a “core rule” shouldn’t fail for over 15% of the population. “Sentient things that are able to argue about harm, justice, and fairness are moral agents” isn’t a weaker rule than “Violating bodily autonomy is bad.”
Do you believe that the ability to understand the likely consequences of actions is a requirement for an entity to be an active moral agent?