Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don’t include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don’t include them only in virtue of that fact). We include them because, as sentient beings, their subjective well-being matters to them. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would still want to be included if we lost the ability to follow ethical rules. In other words: moral agency is not necessary for the status of a moral patient, i.e. a being that matters morally.
The question is how we should treat humans and chickens (i.e. whether and how our decision-making algorithm should take them and their interests into account), not what social behavior we find among humans and chickens.
Constructing an ethics that demands that a chicken act as a moral agent is obviously nonsense; chickens can’t and won’t act that way. Similarly, constructing an ethics that demands humans value chickens as much as they value their own children is nonsense; humans can’t and won’t act that way. If you’re constructing an ethics for humans to follow, you have to start by figuring out humans.
It’s not until after you’ve figured out how much humans should value the interests of chickens that you can determine how much weight those interests should get in how humans act. And how much weight humans should give to chickens is necessarily determined by what humans are.
Well, if humans can’t and won’t act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent; instead, we should try to modify all agents according to ethics.
If we did model ethics after particular types of agent, here’s what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should do. And we can’t morally compare types A and B.
But type B is obviously objectively better, and objectively less of a jerk. (Whether type A agents can be rationally motivated, or modified, to become more B-like is a different question.)
Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy.
However, rather than “objectively better”, it would be clearer to say “more in line with our morals” or some such. It’s not as if our morals came from nowhere, after all.
See also: “The Bedrock of Morality: Arbitrary?”
Just to be clear, are you saying that we should treat chickens how humans want to treat them, or how chickens want to be treated? Because if the former, then yeah, CEV can easily find out whether we’d want them to have good lives or not (and I think it would see that we do).
But chickens don’t (I think) have much of an ethical system, and if we incorporated their values into what CEV calculates, then we’d be left with some important human values, but also a lot of chicken feed.
Thanks, Benito. Do we know that we shouldn’t have a lot of chicken feed? My point in asking this is just that we’re baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers—I want to bake in my answers—but I’m just highlighting that it’s not obvious that the set of human minds is the right one to extrapolate.
BTW, I think the “brain reward pathways” of humans and chickens aren’t that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.
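To make the “baking in the answer” point concrete, here is a minimal toy sketch in Python. It is emphatically not any real CEV procedure; it just averages invented preference weights over whichever set of minds we choose to extrapolate. The profiles, outcomes, and numbers are all hypothetical, and the only point is that which minds go into the aggregation largely fixes what comes out of it.

```python
# Toy illustration only; not a real CEV procedure. All profiles and
# weights below are invented for the sake of the example.

def aggregate(profiles):
    """Average each outcome's weight across the chosen set of minds."""
    totals = {}
    for profile in profiles:
        for outcome, weight in profile.items():
            totals[outcome] = totals.get(outcome, 0.0) + weight
    return {outcome: total / len(profiles) for outcome, total in totals.items()}

# Hypothetical extrapolated preference profiles.
human = {"human flourishing": 0.8, "chicken welfare": 0.2}
chicken = {"chicken welfare": 0.9, "plentiful chicken feed": 0.1}

# Extrapolating only human minds vs. humans plus chickens:
print(aggregate([human]))           # chicken welfare weighted 0.2
print(aggregate([human, chicken]))  # chicken welfare weighted 0.55, plus some chicken feed
```

The sketch deliberately leaves open which set of profiles is the right one to pass in; that choice is exactly what the question above is about.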