When is a fetus or baby capable of feeling pain (that has moral disvalue)? What about (non-human) animals?
When it is similar enough to me that I can feel empathy for its mental state. If torture vs. dust specks, the Babyeaters, and paperclip maximizers have taught me anything, it's that I value the utility of other agents according to their similarity to me. I value the utility of agents who are indistinguishable from me most highly, and very dissimilar agents (or inanimate matter) the least.
When I think about quantifying that similarity, I think of how many experiences and thoughts I can meaningfully share with the agent. The utility of an agent that can think, feel, and act the way I can carries much more weight in my utility function, and an agent that actually does think, experience, and act like I do carries even more. If I compare the set of my actions, thoughts, and experiences, ME, to the agent's set, AGENT, then U(me) + (|AGENT ∩ ME| / |ME|) * U(agent) seems like a reasonable starting point: the weight on the agent's utility runs from 0 for an agent that shares nothing with me up to 1 for an agent indistinguishable from me. Comparing actions would probably be done by comparing the output of my decision theory to the agent's. It might even be possible to directly compare U_me(world) to U_agent(world)[agent_self=me_self] (the agent's utility function with its representation of itself replaced by a representation of me); the more similar the resulting functions, the more empathy I would have for the agent. I would also want to include a factor for my estimate of the agent's future utility function. For example, a fetus will likely have a utility function much closer to mine in 20 years, so I would have more empathy for the future state of a fetus than for the future state of a Babyeater.
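As a rough sketch of that starting-point formula (in Python): here ME and AGENT are treated as plain sets of feature strings, and the function names, feature sets, and toy utility numbers are all illustrative assumptions rather than a worked-out model.

```python
# Sketch of the similarity-weighted utility described above.
# Representing ME and AGENT as sets of feature strings, and the example
# utilities, are illustrative assumptions, not a worked-out model.

def empathy_weight(me_features: set[str], agent_features: set[str]) -> float:
    """Fraction of my actions/thoughts/experiences the agent shares:
    |AGENT ∩ ME| / |ME|."""
    if not me_features:
        return 0.0
    return len(agent_features & me_features) / len(me_features)

def combined_utility(u_me: float, u_agent: float,
                     me_features: set[str], agent_features: set[str]) -> float:
    """U(me) + (|AGENT ∩ ME| / |ME|) * U(agent)."""
    return u_me + empathy_weight(me_features, agent_features) * u_agent

# Toy example: a near-duplicate of me counts almost fully; a paperclip
# maximizer that shares nothing with me counts not at all.
ME = {"feels pain", "plans ahead", "values friendship", "enjoys music"}
TWIN = {"feels pain", "plans ahead", "values friendship"}
CLIPPY = {"counts paperclips"}

print(combined_utility(10.0, 10.0, ME, TWIN))    # 10 + 0.75 * 10 = 17.5
print(combined_utility(10.0, 10.0, ME, CLIPPY))  # 10 + 0.0  * 10 = 10.0
```

The future-utility factor mentioned above (e.g. the fetus case) isn't modeled here; one way it might be folded in is as an additional term weighted by the expected future overlap rather than the current one.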