There is a confusion that results when you consider either system (objective or subjective ethics) from the viewpoint of the other.
(The objective ethical system viewpoint of human ethics.) Suppose that there is an objective ethical system defining a set of imperatives. Also, separately, we have subjectively determined human ethics. The subjective human ethics overlapping with the objective imperatives are actual imperatives; the rest are just preferences. It is possible that the objective imperatives are not known to us, in which case, we may or may not be satisfying them and we are not aware of our objective value (good or bad).
(The subjective ethical system viewpoint of human ethics.) In the case of no objective ethical system, imperatives are subjectively collectively determined. We are bad or good—to whatever extent it is possible to be ‘bad’ or ‘good’—if we think we are bad or good. This is self-validation.
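The first viewpoint is essentially a claim about set overlap, and can be sketched as one. (The particular rules below are invented placeholders for illustration, not positions anyone in this discussion holds.)

```python
# Toy model of the first viewpoint: subjective human ethics held up
# against a (possibly unknown) objective ethical system.
# All specific rules here are hypothetical examples.

subjective_ethics = {"don't kill", "don't steal", "be polite"}
objective_imperatives = {"don't kill", "don't lie"}  # unknown to us in practice

# Subjective rules that coincide with objective imperatives are
# actual imperatives; the rest are mere preferences.
actual_imperatives = subjective_ethics & objective_imperatives
mere_preferences = subjective_ethics - objective_imperatives

print(sorted(actual_imperatives))  # ["don't kill"]
print(sorted(mere_preferences))    # ["be polite", "don't steal"]
```

Note that nothing in the model tells us which elements of `objective_imperatives` we actually know, which is the point of the last sentence above: we may satisfy or violate them without being aware of it.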
Now, to address your objections:
Human ethics aren’t even universal among humans. Plenty of humans live and have lived who would think I should rightly be killed (for not obeying some religious prescription, for instance). On the other hand, some humans believe no one should be killed and no one has the right to kill anyone else, ever. Many more opinions exist.
Right, human ethics do seem very inconsistent. To me, this is a challenge only to the existence of subjective ethics. In the case of objective ethics, there is no contradiction if humans disagree about what is ethical; humans do not define what is objectively ethical. In the case of a subjective ethical system, inconsistency in human ethics is evidence that there is no well-defined notion of “human ethics”, only individual ethics.
Nevertheless, in defense of ‘human ethics’ for either system, perhaps it is the case that human ethics are actually consistent, in a way that matters, but the terminal values are of such high order that we don’t easily find them. All the different moral behaviors we see are different manifestations of common values.
(2) I know of no reason why an AI couldn’t be built with different ethics from ours, or with no ethics at all. A paperclipper AI could be very intelligent, conscious (whatever that means), but still unethical by our lights. If anyone believes that such unethical minds literally cannot exist, the burden of proof is on them.
Of course, minds could evolve or be constructed with different subjective ethical systems. Again, they may or may not be objectively ethical.
The subjective human ethics overlapping with the objective imperatives are actual imperatives; the rest are just preferences.
This redefinition of the word “imperative” goes counter to the existing meaning of the word (which would include all ‘preferences’), so it’s confusing. I suggest you come up with a new term or word-combination.
In the case of objective ethics, there is no contradiction if humans disagree about what is ethical; humans do not define what is objectively ethical.
You defined objective ethics as something every rational thinking being could derive. Shouldn’t it also have some meaning? Some reason why they would in fact be interested in deriving it?
If this objective ethics can be derived by everyone, but happens to run counter to almost everyone’s subjective ethics, why is it even interesting? Why would we even be talking about it unless we either expected to encounter aliens with subjective ethics similar to it, or were considering adopting it as our own subjective ethics?
Nevertheless, in defense of ‘human ethics’ for either system, perhaps it is the case that human ethics are actually consistent, in a way that matters, but the terminal values are of such high order that we don’t easily find them. All the different moral behaviors we see are different manifestations of common values.
That definitely requires proof. Have you got even a reason for speculating about it, any evidence for it?
You defined objective ethics as something every rational thinking being could derive.
Actually, I didn’t. I would be interested in AaronBenson’s answers to the questions that follow.
That definitely requires proof. Have you got even a reason for speculating about it, any evidence for it?
Here, I was just suggesting a solution. I don’t have much interest in the concept of ‘human’ ethics. (Like Jack, I would be very interested in what ethics are universal to all evolved, intelligent, social minds.)
… Yet I didn’t suggest it randomly. My evidence for it is that whenever someone seems to have a different ethical system from my own, I can usually eventually relate to it by finding a common value.
The subjective human ethics overlapping with the objective imperatives are actual imperatives; the rest are just preferences.
This redefinition of the word “imperative” goes counter to the existing meaning of the word (which would include all ‘preferences’), so it’s confusing. I suggest you come up with a new term or word-combination.
I was using the meaning of imperative as something you ‘ought’ to do, as in moral imperative. This does not include preferences unless you feel like you have a moral obligation to do what you prefer to do.
Right, sorry, that was AaronBensen’s definition.