Watching the ensuing commentary, I’m drawn to wishfully imagine a highly advanced Musashi, wielding his high-dimensional blade of rationality such that in one stroke he delineates and separates the surrounding confusion from the nascent clarity. Of course no such vorpal katana could exist, for if it did, it would serve only to better clear the way for its successors.
I see a preponderance of viewpoints representing, in effect, the belief that “this is all well and good, but how will this guide me to the one true prior, from which Archimedean point one might judge True Value?”
I see some who, given a method for reliably discarding much which is not true, say scornfully in effect “How can this help me? It says nothing whatsoever about Truth itself!”
And then there are the few who recognize we are each like leaves of a tree rooted in reality, and while we should never expect exact agreement between our differing subjective models, we can most certainly expect increasing agreement—in principle—as we move toward the root of increasing probability, pragmatically supporting, rather than unrealistically affirming, the ongoing growth of branches of increasing possibility. [Ignoring the progressive enfeeblement of the branches necessitating not just growth but eventual transformation.]
Eliezer, I greatly appreciate the considerable time and effort you must put into your essays. Here are some suggested topics that might help reinforce and extend this line of thought:
Two communities, separated by a chasm
Would it be seen as better (perhaps obviously) to build a great bridge between them, or to consider the problem in terms of an abstract hierarchy of values, for example involving impediments to transfer of goods, people, … ultimately information, for which building a bridge is only a special-case solution? In general, is any goal not merely a special case (and utterly dependent on its specifiability) of values-promotion?
Fair division, etc.
Probably nearly all readers of Overcoming Bias are familiar with the principled approach to fairly dividing a cake into two pieces, and higher-order solutions have been shown to be possible, with attendant computational demands. Similarly, Rawls proposed that we ought to be satisfied with social choice implemented by best-known methods behind a veil of ignorance as to specific outcomes in relation to specific beneficiaries. Given the inherent uncertainty of specific future states within any evolving system complex enough to be of moral interest, what does this imply about shifting moral attention away from expected consequences and toward increasingly effective principles that reasonably optimize our expectation of improving, but unspecified and indeed unspecifiable, consequences? Bonus question: how might this apply to Parfit’s Repugnant Conclusion and other well-recognized “paradoxes” of consequentialist utilitarianism?
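For readers who want the two-piece case made concrete, here is a minimal sketch of the classic divide-and-choose protocol, assuming each player’s subjective valuation of the cake [0, 1] is given by a hypothetical density function of my own choosing. The divider cuts where she values the two pieces equally; the chooser takes whichever piece he values more. Each is thereby guaranteed at least half the cake by his or her own measure, whatever the other’s valuation.

```python
def value(density, a, b, n=10_000):
    """Approximate the integral of `density` over [a, b] by the midpoint rule."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    return sum(density(a + (i + 0.5) * h) for i in range(n)) * h

def divide_and_choose(divider_density, chooser_density, tol=1e-6):
    """Return (divider_piece, chooser_piece) as (a, b) intervals of [0, 1]."""
    # Divider bisects for the cut point x where her value of [0, x]
    # equals her value of [x, 1].
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if value(divider_density, 0.0, mid) < value(divider_density, mid, 1.0):
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    # Chooser takes whichever piece he values more; divider gets the other.
    left, right = (0.0, x), (x, 1.0)
    if value(chooser_density, *left) >= value(chooser_density, *right):
        return right, left
    return left, right

# Illustrative valuations (assumed, not from any real data): the divider
# values the cake uniformly; the chooser prefers the right end.
d_piece, c_piece = divide_and_choose(lambda t: 1.0, lambda t: 2.0 * t)
```

With these valuations the divider cuts at 0.5 and the chooser takes the right half, which he values at three quarters of the whole; both walk away with at least half by their own lights, which is the envy-free guarantee the protocol provides.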
Constraints essential for meaningful growth
Widespread throughout the “transhumanist” community is the belief that considerable, if not indefinite, progress can be attained by “overcoming constraints.” Paradoxically, the accelerating growth of possibilities that we experience arises not from overcoming constraints but from embracing them in ever-increasing technical detail. Meaningful growth necessarily occurs within an increasingly constrained possibility space—fortunately there’s plenty of fractal interaction area within any space of real numbers—while unconstrained growth is akin to a cancer. An effective understanding of meaningful growth depends on an effective understanding of the subjective/objective dichotomy.
Thanks again for your substantial efforts.