I argued repeatedly and at length on the Extropian and Transhumanist discussion lists from 2004 to about 2010 for a metaethics based on the idea that actions assessed as increasingly “moral” (right in principle) are those assessed as promoting (1) values, hierarchical and fine-grained, increasingly coherent over an increasing context of meaning-making, via (2) instrumental methods, increasingly effective in principle, over an increasing scope of consequences. Lather, rinse, repeat, with consequences tending to select for the values, and the methods for their promotion, that “work” (meaning “persist”).
The instrumental methods half of this—the growth in scope of our model of science and technology—is generally well accepted.
The values half of this—the growth in context of our model of meaning-making—not so much, for a handful of understandable reasons rooted in our developmental history.
Together, these orthogonal aspects tend to support and reinforce meaningful growth.
The Arrow of Morality points in no particular direction but outward—with increasing coherence over increasing context—and suggests we would do well to act intentionally to promote growth of our models in these two orthogonal dimensions.
Conceptual roadblocks include difficulties with evolutionary dynamics (including multi-level selection), synergistic (anti-entropic) expansion in both of the dimensions mentioned above (from the point of view of any agent), agency as inherently perspectival (subjective, but not arbitrary), and unwillingness to accept an ever-broadening identification of “self”.
Due (in my opinion) to these difficult and culturally pervasive conceptual roadblocks, I never gained much traction in my attempts to convey and test this thinking, and I eventually decided to stop beating a horse that was not so much dead, as had never really lived. I believe we’ll make progress on this, two steps forward, one step back, to the extent we live and learn and become more ready. [Which is by no means guaranteed...]
I have not found any substantial literature supporting this thinking, but I can point you in the direction of bits and pieces, and we might discuss further (work and family permitting) if you would like to contact me privately.
Jef
Oh, and a short, possibly more direct response:
Values (within context) lead to preferences; preferences (within context) lead to actions; and actions (within context) lead to consequences.
Lather, rinse, repeat, updating your models of what matters and what works as you go.
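If it helps to see that loop spelled out mechanically, here is a toy sketch in Python (entirely an illustration; the particular values, methods, and persistence numbers are invented for the example, not part of the argument) of one agent cycling values to preferences to actions to consequences, and letting what persists reweight its models of what matters and what works:

    # Hypothetical illustration of the loop described above, not a formalism:
    # values (within context) -> preferences -> actions -> consequences,
    # with consequences selecting for the values and methods that persist.
    import random

    # Candidate values and candidate methods, each with a weight the loop adjusts.
    values = {"cooperation": 1.0, "short_term_gain": 1.0}
    methods = {"negotiate": 1.0, "coerce": 1.0}

    def prefer(weights):
        """Pick an option with probability proportional to its weight (a preference)."""
        total = sum(weights.values())
        r = random.uniform(0, total)
        for name, w in weights.items():
            r -= w
            if r <= 0:
                return name
        return name

    def consequences(value, method):
        """Toy environment: some value/method pairings persist better than others."""
        persistence = {
            ("cooperation", "negotiate"): 0.9,
            ("cooperation", "coerce"): 0.4,
            ("short_term_gain", "negotiate"): 0.5,
            ("short_term_gain", "coerce"): 0.2,
        }
        return random.random() < persistence[(value, method)]

    # Lather, rinse, repeat: update what matters and what works as you go.
    for _ in range(1000):
        v = prefer(values)               # values lead to preferences
        m = prefer(methods)              # preferences lead to actions (via a method)
        persisted = consequences(v, m)   # actions lead to consequences
        delta = 0.05 if persisted else -0.05
        values[v] = max(0.01, values[v] + delta)    # update the model of what matters
        methods[m] = max(0.01, methods[m] + delta)  # update the model of what works

    print("value weights:", {k: round(w, 2) for k, w in values.items()})
    print("method weights:", {k: round(w, 2) for k, w in methods.items()})

Run long enough, the weights drift toward the value/method pairings that tend to persist, which is all the toy is meant to show; the real claim concerns agents whose values and methods grow in coherence and scope, not a fixed menu of options.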