Thanks for this thorough post. What you have described is known as the “properties-based” approach to moral status. In addition to sentience, others have argued that intelligence, rationality, consciousness, or other traits must be present for an entity to be worthy of moral concern. But as I argued in my 2020 book, Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge), this is a Sisyphean task: philosophers don’t (and may never) agree about which of these properties is necessary. We need a different approach altogether to figure out what obligations we might have towards non-humans like AI. Scholars like David Gunkel, Mark Coeckelbergh, and I have advocated for a relations-based approach, which is grounded in how humans and others actually interact with and relate to one another. We maintain that this is a more realistic, accurate, and less controversial way of assessing moral status.