With kind apologies, this section seems surprisingly lacking in the essential natural philosophy and emotional factors contributing to the agency and goals of biological organisms, which are the only entities we're aware of thus far that have developed agency, and whose traits are therefore inextricable from even an abstract discussion of it.
(Such as the evolved brain/body connections of emotion, pain, and pleasure that feeling creatures have: a sense of warmth and happiness from positive community and group engagement, compassion for others, and so on.)
Especially in how we develop Consequentialism, Scale, and Planning as they relate to our self-preservation instincts, and how those instincts connect to an innate understanding of how deeply our own well-being depends on our ecosystems and on the health of everything else.
(It seems safe to predict that, as such, biological ecosystems with feeling agents are the only ones that could mutually self-sustain by default on Earth, as opposed to simply using everything up and grinding the whole thing to a halt.
That, or subsuming the current biology and replacing it with something else entirely: something techno-integrative, but still obviating of us.
Especially if powerful-enough free agents felt no concern for self-preservation through their mutual interdependence with all other living things, no deep appreciation for life and its organisms for their own sake, and not even any simulation of pain and pleasure in response to positive and negative impacts upon them.)
Merely defining agency as your six factors, without any emotional component whatsoever, and goals as mere endpoints, devoid of any alignment with natural philosophy, is a hollow, superficial, and fragile approach.
It is not just predicated on oversights whose omission can have very harmful repercussions; in terms of safety assessment, it is negligent of the fact that it may even be an advantage for an AGI, in subsuming us on autopilot, *not* to develop agency to the extent you've defined it here.
Lastly, of course, in assessing safety, it also appears you've omitted the eventuality of intentionally malevolent human actors.
Some key assumptions and omissions here, very respectfully.