Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us, which could be objectively determined by a thorough and competent enough analysis of humans. The edges of the bell curve on various traits might show certain variances. There might be a multi-modal distribution of fit (bimodal on men and women, for example), too. But, basically, one objective ethics for humans.
This ethics would clearly be unsuited to cats, sharks, bees, or trees. It seems vanishingly unlikely that sapient minds from other evolutionary lineages would be suited to such an ethics either. So it’s not universal; it’s not a code God wrote into everything. It’s just the best way to be a human . . . as humans exposed to it would in fact judge, because it’s fitted to us better than any of our current fumbling attempts.
Why not include primates, dolphins, rats, chickens, etc. into the ethics?
What would that mean? How would the chicken learn or follow the ethics? Does it seem even remotely reasonable that social behavior among chickens and social behavior among humans should follow the same rules, given the inherent evolutionary differences in social structure and brain reward pathways?
It might be that CEV is impossible for humans, but there’s at least enough basic commonality to give it a chance of being possible.
Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don’t include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don’t include them only in virtue of that fact). We include them because their subjective well-being matters to them as sentient beings. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would want to still be included if we lost the ability to follow ethical rules. In other words: Moral agency is not necessary for the status of a moral patient, i.e. of a being that matters morally.
The question is how we should treat humans and chickens (i.e. whether and how our decision-making algorithm should take them and their interests into account), not what social behavior we find among humans and chickens.
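To make the agent/patient distinction concrete, here is a minimal sketch (purely illustrative, not anything proposed in this thread; the beings, numbers, and the action_value helper are all invented): the ethics is a decision rule that only moral agents are asked to run, but the welfare it counts ranges over every sentient being, agent or not.

```python
from dataclasses import dataclass

@dataclass
class Being:
    name: str
    sentient: bool     # has a subjective well-being that can go better or worse
    moral_agent: bool  # can understand and follow ethical rules

def action_value(effects, beings):
    """Score an action by the welfare change it causes for every sentient
    being affected (the moral patients), whether or not that being is
    itself a moral agent."""
    return sum(effects.get(b.name, 0.0) for b in beings if b.sentient)

beings = [
    Being("adult human", sentient=True, moral_agent=True),
    Being("human infant", sentient=True, moral_agent=False),
    Being("chicken", sentient=True, moral_agent=False),
]

# Only the moral agents are asked to run this rule; the infant and the chicken
# never have to learn or follow the ethics for their interests to count in it.
option_a = {"adult human": 1.0, "chicken": -3.0}  # cheap eggs, miserable chicken
option_b = {"adult human": 0.5, "chicken": 0.5}   # pricier eggs, decent chicken life
best = max([option_a, option_b], key=lambda eff: action_value(eff, beings))
print(best)  # option_b wins under these made-up numbers
```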
Constructing an ethics that demands that a chicken act as a moral agent is obviously nonsense; chickens can’t and won’t act that way. Similarly, constructing an ethics that demands humans value chickens as much as they value their own children is nonsense; humans can’t and won’t act that way. If you’re constructing an ethics for humans to follow, you have to start by figuring out humans.
It’s not until after you’ve figured out how much humans should value the interests of chickens that you can determine how much weight those interests get in how humans should act. And how much humans should value the interests of chickens is, by necessity, determined by what humans are.
Well, if humans can’t and won’t act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent, but we should instead try and modify all agents according to ethics.
If we did model ethics after particular types of agent, here’s what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should do. And we can’t morally compare types A and B.
But type B is obviously objectively better, and objectively less of a jerk. (Whether type A agents can be rationally motivated (or modified so as) to become more B-like is a different question.)
Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy.
However, rather than “objectively better”, it would be clearer to say “more in line with our morals” or some such. It’s not as if our morals came from nowhere, after all.
See also: “The Bedrock of Morality: Arbitrary?”
Just to make clear, are you saying that we should treat chickens how humans want to treat them, or how chickens want to be treated? Because if the former, then yeah, CEV can easily find out whether we’d want them to have good lives or not (and I think it would see that we do).
But chickens don’t (I think) have much of an ethical system, and if we incorporated their values into what CEV calculates, then we’d be left with some important human values, but also a lot of chicken feed.
Thanks, Benito. Do we know that we shouldn’t have a lot of chicken feed? My point in asking this is just that we’re baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers (I want to bake in my answers), but I’m just highlighting that it’s not obvious that the set of human minds is the right one to extrapolate; the toy sketch below illustrates how much that choice matters.
BTW, I think the “brain reward pathways” between humans and chickens aren’t that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.
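To put a number on how much that choice matters, here is a toy sketch (purely illustrative: the value dimensions and vectors are invented, and real extrapolation would be nothing like a simple average; numpy is assumed to be available). Averaging over humans alone versus humans plus chickens gives visibly different “extrapolated” priorities.

```python
import numpy as np

# Hypothetical value dimensions: [art & science, social bonds, chicken feed]
value_profiles = {
    "human A":   np.array([0.6, 0.4, 0.0]),
    "human B":   np.array([0.5, 0.5, 0.0]),
    "chicken 1": np.array([0.0, 0.1, 0.9]),
    "chicken 2": np.array([0.0, 0.1, 0.9]),
}

def extrapolate(names):
    """Stand-in for extrapolation: average the (made-up) value vectors of
    whichever minds we chose to include, then renormalize."""
    v = np.mean([value_profiles[n] for n in names], axis=0)
    return v / v.sum()

print(extrapolate(["human A", "human B"]))
# ~ [0.55, 0.45, 0.00]: no weight at all on chicken feed
print(extrapolate(list(value_profiles)))
# ~ [0.275, 0.275, 0.45]: suddenly the largest single term is chicken feed

# The result is dominated by a decision made before any extrapolation runs:
# which minds are in the base set.
```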
Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us
It does not imply that there exists even one basic moral/ethical statement any human being would agree with, and to me that seems to be a requirement for any kind of humanity-wide system of ethics. Your ‘one size fits all’ approach does not convince me, and your reasoning seems superficial and based on words rather than actual logic.
All humans as they currently exist, no. But is there a system of ethics as a whole that humans, even while currently disagreeing with some parts of it, would recognize as so much better at doing what they really want from an ethical system that they would switch to it? Even in the main? Maybe, indeed, human ethics are so dependent on alleles that vary within the population and chance environmental factors that CEV is impossible. But there’s no solid evidence to require assuming that a priori, either.
By analogy, consider a person who in 1900 wanted to put together the ideal human diet. Obviously, the diets in different parts of the world differed from each other extensively, and merely averaging all the diets that existed in 1900 would not be particularly conducive to finding an actual ideal diet. The person would have to do all the sorts of research that discovered the roles of various nutrients and micronutrients, et cetera. Indeed, he’d have to learn more than we currently know about them. And he’d have to work out the variations needed for various medical conditions, and he’d have to consider flavor (both innate response pathways and learned ones), et cetera. And then there are the limits of what foods can be grown where, what shipping technologies exist, and how to approximate the ideal diet in differing circumstances.
It would be difficult, but eventually you probably could put together a dietary program (including understood variations) that would, indeed, suit humans better than any of the diets existing in 1900, both in nutrition and pleasure. It wouldn’t suit sharks at all; it would not be universal nutrition. But it would be an objectively determined diet just the same.
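For what it’s worth, the “objectively determined diet” idea can be made concrete as an optimization problem in the style of the classic Stigler diet: choose food quantities that meet nutrient requirements at least cost, with per-person constraints producing the “understood variations”. A minimal sketch follows; the foods, nutrient numbers, and requirements are entirely made up, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up foods (columns) and nutrients per unit of each food (rows).
foods = ["grain", "beans", "milk", "fish"]
nutrients = np.array([
    [ 3.0,  8.0,  3.5, 20.0],   # protein
    [ 1.0,  2.0, 12.0,  1.0],   # calcium
    [90.0, 70.0, 60.0, 80.0],   # calories
])
cost = np.array([1.0, 1.5, 2.0, 5.0])          # cost per unit of each food
requirements = np.array([50.0, 30.0, 2000.0])  # daily minimums

def ideal_diet(exclude=()):
    """Cheapest mix of foods meeting the nutrient minimums; 'exclude'
    handles individual variations such as lactose intolerance."""
    bounds = [(0, 0) if f in exclude else (0, None) for f in foods]
    # linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the constraint
    # nutrients @ x >= requirements becomes -nutrients @ x <= -requirements.
    res = linprog(cost, A_ub=-nutrients, b_ub=-requirements, bounds=bounds)
    return dict(zip(foods, res.x.round(2)))

print(ideal_diet())                    # the baseline program
print(ideal_diet(exclude=("milk",)))   # the variation for the lactose intolerant
```

The two outputs differ, but both fall out of the same objective procedure, which is the sense in which a single “dietary program (including understood variations)” is still one program.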
The problem with this diet is that it wouldn’t be a diet; it would be many different diets. Lots of people are lactose intolerant and it would be stupid to remove dairy products from the diet of those who are not. Likewise, a vegetarian diet is not a “variation” of a non-vegetarian diet.
Also, why are you talking about 1900?
Maybe, indeed, human ethics are so dependent on alleles that vary within the population and chance environmental factors that CEV is impossible. But there’s no solid evidence to require assuming that a priori, either.
I think the fact that humans can’t agree on even the most basic issues is pretty solid evidence. Also, even if everyone had the same subjective ethics, it would still result in objective contradictions. I’m not aware of any evidence that this problem is solvable at all.
Objective? Sure, without being universal.