For evolution in general, this is obviously pattern measure, and truly cannot be anything else.
This sure sounds like my attempt elsewhere to describe your position:
There’s no such thing as misalignment. There’s one overarching process, call it evolution or whatever you like, and this process goes through stages of creating new things along new dimensions, but all the stages are part of the overall process. Anything called “misalignment” is describing the relationship of two parts or stages that are contained in the overarching process. The overarching process is at a higher level than that misalignment relationship, and the misalignment helps compute the overarching process.
One evolutionary process but many potential competing sub-components. Of course there is always misalignment.
The implied optimization gradient of any two different components of the system can never be perfectly aligned (as otherwise they wouldn’t be different).
The foom doom argument is that humanity and AGI will be very misaligned such that the latter’s rise results in the extinction of the former.
The analogy from historical evolution is the misalignment between human genes and human minds, where the rise of the latter did not result in extinction of the former. It plausibly could have, but that is not what we observe.
The analogy is that the human genes thing produces a thing (human minds) which wants stuff, but the stuff it wants is different from what the human genes want. From my perspective you’re strawmanning and failing to track the discourse here to a sufficient degree that I’m bowing out.
The analogy is that the human genes thing produces a thing (human minds) which wants stuff, but the stuff it wants is different from what the human genes want.
Not nearly different enough to prevent the human genes from getting what they want in excess.
If we apply your frame of the analogy to AGI, we have slightly misaligned AGI which doesn’t cause human extinction, and instead enormously amplifies our utility.
From my perspective you’re strawmanning and failing to track the discourse here to a sufficient degree that I’m bowing out.
From my perspective you persistently ignore, misunderstand, or misrepresent my arguments, overfocus on pedantic details, and refuse to update or agree on basics.
This sure sounds like my attempt elsewhere to describe your position:
Which you dismissed.