“Self-improvement” is one of those things which most humans can nod along to, but only because we’re all assigning different meanings to it. Some people will read “self-improvement” and think self-help books, individual spiritual growth, etc.; some will think “transhumanist self-alteration of the mind and body”; some will think “improvement of the social structure of humanity even if individual humans remain basically the same”; etc.
It looks like a non-controversial thing to include on the list, but that’s basically an optical illusion.
For those same reasons, it is much too broad to be programmed into an AGI as-is without horrifying consequences. The A.I. settling on “maximise human biological self-engineering” and deciding to nudge extremist eugenicists into positions of power is, like, one of the optimistic scenarios for how well that could go. I’m sure you can theoretically define “self-improvement” in ways that don’t lead to horrifying scenarios, but then we’re just back to Square 1 of having to think harder about what moral parameters to set rather than boiling it all down to an allegedly “simple” goal like “human self-improvement”.
The operationalization would indeed be the next step. I disagree that the first step is meaningless without it, though. E.g., having some form of self-improvement in the goal set is important, as we want to do more than just survive as a species.