Cyan: “...tangential to the point of the post, to wit, evolutionary adaptations can cause us to behave in ways that undermine our moral intentions.”
On the contrary, promotion into the future of a [complex, hierarchical] evolving model of values of increasing coherence over increasing context would seem to be central to the topic of this essay.
Fundamentally, any system, through interaction with its immediate environment, only ever expresses its values (its physical nature). “Intention,” corresponding to “free will,” is merely derivative and, for the practical purposes of this analysis of the system dynamics, is just “along for the ride.”
But to the extent that the system involves a reflexive model of its values, an inherently subjective view of its own nature, then increasing effectiveness in principle, assessed indirectly via observations of those values being promoted over an increasing external scope of consequences, tends to correspond with increasing coherence of the (complex, hierarchical) inter-relationships among the elements of the model, over increasing context of meaning-making (an increasing web of supporting evidence). Wash, rinse, repeat with ongoing interaction --> selection for “that which tends to work” --> updating of the model...
“Morality” enters the picture only with regard to groups of agents. For a single, isolated agent, “morality” doesn’t apply; there is only the “good” of whatever is assessed as promoting that agent’s (present, but evolving) values-complex. At the other end of the scale of subjectivity, in the god’s-eye view, there is no morality, since all is simply and perfectly as it is.
But along that scale, regardless of the subjective starting point (whether human agency at various scales, other biological agency, or machine-phase agency), action will tend to be assessed as increasingly moral to the extent that it is assessed as promoting, in principle, (1) a subjective model of values increasingly coherent over increasing context (of meaning-making, of evidential observation) over (2) an increasing scope of objective consequences.
Evolutionary processes have encoded this accumulating “wisdom,” slowly and painfully, into the heuristics supporting the persistence of the physical, biological, and cultural branch with which we self-identify. With the ongoing acceleration of the Red Queen’s Race, I see this meta-ethical theory becoming ever more explicitly applicable to “our” ongoing growth as intentional agents, of whatever form or substrate.
Cyan: “...limit the universe of discourse to actions which have predictable effects...”
I’m sorry, but my thinking is grounded almost entirely in systems and information theory, so when terms like “universe of discourse” appear, my post-modernism immune response kicks in and I find myself at a loss to continue. I really don’t know what to do with your last statement.