I’m not sure I understand what the post’s central claim/conclusion is. I’m curious to understand it better. To focus on the Summary:
So overall, evolution is the source of ethics,
Do you mean: Evolution is the process that produced humans, and strongly influenced humans’ ethics? Or are you claiming that (humans’) evolution-induced ethics are what any reasonable agent ought to adhere to? Or something else?
and sapient evolved agents inherently have a dramatically different ethical status than any well-designed created agents [...]
...according to some hypothetical evolved agents’ ethical framework, under the assumption that those evolved agents managed to construct the created agents in the right ways (to not want moral patienthood etc.)? Or was the quoted sentence making some stronger claim?
evolution and evolved beings having a special role in Ethics is not just entirely justified, but inevitable
Is that sentence saying that
evolution and evolved beings are of special importance in any theory of ethics (what ethics are, how they arise, etc.), due to Evolution being one of the primary processes that produce agents with moral/ethical preferences [1]
or is it saying something like
evolution and evolved beings ought to have a special role; or we ought to regard the preferences of evolved beings as the True Morality?
I roughly agree with the first version; I strongly disagree with the second:
I agree that {what oughts humans have} is (partially) explained by Evolutionary theory. I don’t see how that crosses the is-ought gap. If you’re saying that that somehow does cross the is-ought gap, could you explain why/how?
[1] I.e., similar to how one might say “amino acids having a special role in Biochemistry is not just entirely justified, but inevitable”?
Do you mean: Evolution is the process that produced humans, and strongly influenced humans’ ethics? Or are you claiming that (humans’) evolution-induced ethics are what any reasonable agent ought to adhere to? Or something else?
Evolution solves the “ought-from-is” problem: it explains how goal-directed (also known as agentic) behavior arises in a previously non-goal-directed universe.
In intelligent social species, where different individuals with different goals interact and have evolved to cooperate via exchanges of mutual altruism, means of reconciling those differing goals evolve as well, including definitions of behavior that is ‘unacceptable and worthy of revenge’, such as distinctions between fair and unfair behavior. So now you have a basic but recognizable form of ethics, or at least ethical intuitions.
So my claim is that Evolutionary psychology, as applied to intelligent social species (such as humans), explains the origin of ethics. Depending on the details of the social species, their intelligence, group size, and so forth, a lot of features of the resulting evolved ethical instincts may vary, but some basics (such as ‘fairness’) are probably going to be very common.
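As a toy illustration of the kind of dynamic I mean (this is not from the post or the sequence, and the payoffs, strategy names, and population sizes below are invented for the example): once interactions are repeated, a reciprocal ‘fair’ strategy like tit-for-tat can out-reproduce unconditional defection. A minimal Python sketch:

```python
# Minimal evolutionary iterated-prisoner's-dilemma sketch (standard library only).
# All numbers and strategy names here are illustrative assumptions, not anything
# taken from the post: they just show reciprocity out-reproducing pure defection.
import random

# Classic prisoner's-dilemma payoffs for the row player: T(5) > R(3) > P(1) > S(0)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ROUNDS_PER_MATCH = 20


def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not their_history else their_history[-1]


def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"


STRATEGIES = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}


def play_match(name_a, name_b):
    """Return the two players' total payoffs over one iterated match."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS_PER_MATCH):
        move_a = STRATEGIES[name_a](hist_a, hist_b)
        move_b = STRATEGIES[name_b](hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b


def one_generation(population):
    """Random pairwise matches, then reproduction proportional to payoff."""
    random.shuffle(population)
    fitness = []
    for i in range(0, len(population) - 1, 2):
        a, b = population[i], population[i + 1]
        score_a, score_b = play_match(a, b)
        fitness.extend([(a, score_a), (b, score_b)])
    names = [name for name, _ in fitness]
    weights = [score for _, score in fitness]
    return random.choices(names, weights=weights, k=len(population))


population = ["tit_for_tat"] * 20 + ["always_defect"] * 80
for _ in range(30):
    population = one_generation(population)
print({name: population.count(name) for name in STRATEGIES})
# Typically tit_for_tat ends up dominating once repeated interaction rewards
# reciprocity: a crude seed of 'fair vs. unfair' distinctions.
```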
and sapient evolved agents inherently have a dramatically different ethical status than any well-designed created agents [...]
...according to some hypothetical evolved agents’ ethical framework, under the assumption that those evolved agents managed to construct the created agents in the right ways (to not want moral patienthood etc.)? Or was the quoted sentence making some stronger claim?
If you haven’t read Part 1 of this sequence, it’s probably worth doing so first, and then coming back to this. As I show there, a constructed agent being aligned to its creating evolved species is incompatible with it wanting moral patienthood.
If a tool-using species constructs something, it ought (in the usual sense of ‘this is the genetic-fitness-maximizing optimal outcome of the activity being attempted, which may not be fully achieved in a specific instance’) to construct something that will be useful to it. If they are constructing an intelligent agent that will have goals and attempt to achieve specific outcomes, they ought to construct something well-designed that will achieve the same outcomes that they, its creators, want, not some random other things. Just as, if they’re constructing a jet plane, they ought to construct a well-designed one that will safely and economically fly them from one place to another, rather than one that goes off course, crashes, and burns. So, if they construct something that has ethical ideas, they ought to construct something with the same ethical ideas as their own. They may, of course, fail, and even be driven extinct by the resulting paperclip maximizer, but that’s not an ethically desirable outcome.
evolution and evolved beings are of special importance in any theory of ethics (what ethics are, how they arise, etc.), due to Evolution being one of the primary processes that produce agents with moral/ethical preferences [1]
or is it saying something like
evolution and evolved beings ought to have a special role; or we ought to regard the preferences of evolved beings as the True Morality?
I roughly agree with the first version; I strongly disagree with the second: I agree that {what oughts humans have} is (partially) explained by Evolutionary theory. I don’t see how that crosses the is-ought gap. If you’re saying that that somehow does cross the is-ought gap, could you explain why/how?
The former. (To the extent that there’s any stronger claim, it’s made in the related post Requirements for a Basin of Attraction to Alignment.)
Definitely read Part 1, or at least the first section of it, What This Isn’t, which describes my viewpoint on what ethics is. In particular, I’m not a moral absolutist or moral realist, so I don’t believe there is a single well-defined “True Morality”; thus your second suggested interpretation is outside my frame of reference. I’m describing common properties of ethical systems suitable for use by societies consisting of one or more evolved sapient species and the well-aligned constructed agents that they have created. Think of this as the ethical-system-design equivalent of a discussion of software engineering design principles.
So I’m basically discussing “if we manage to solve the alignment problem, how should we then build a society containing humans and AIs?”, on the theory of change that it may be useful, while solving the alignment problem (e.g. during AI-assisted alignment or value learning), to have already thought about where we’re trying to get to.
If you were instead soon living in a world that contains unaligned constructed agents of capability comparable to or greater than a human, i.e. unaligned AGIs or ASIs (that are not locked inside a very secure box or held in check by much more powerful aligned constructed agents), then a) someone has made a terrible mistake, b) you’re almost certainly doomed, and c) your only remaining option worth trying is a no-holds-barred, all-out war of annihilation, so we can forget discussions of designing elegant ethical systems.
That clarifies a bunch of things. Thanks!