==
(Footnotes to the above: formatting on them got screwy)
(1) The whole meta-ethics Sequence is shot through with the idea that compromise on instrumental values is possible given shared terminal values, even if it doesn’t seem that way at first; thus humans can coexist, and extracting a “coherent volition” of humanity is possible. Entities with different terminal values, by contrast, are varelse: there’s just no point of compatibility.
The recurring message is that any notion of compromise on terminal values is just wrongheaded, which is why the SHFP’s solution to the Babykiller problem is presented as flawed, as is viewing the Pebblesorters as having a notion of right and wrong deserving of moral consideration. Implementing our instrumental values can leave us tragically happy, on this view, because our terminal values are the ones that really matter.
More generally, LW’s formulation of post-Singularity ethics (aka Fun) seems to depend on this distinction. The idea of a reflectively stable shared value system that can survive a radical alteration of our environment (e.g., the ability to create arbitrary numbers of systems with the same moral weight that I have, or even mere immortality) is pretty fundamental, not just for the specific Fun Theory proposed, but for any fixed notion of what humans would find valuable after such a transition. If I don’t have a stable value system in the first place, or if my stable values are fundamentally incompatible with yours, then the whole enterprise is a non-starter… and clearly our instrumental values are neither stable nor shared. So the hope that our terminal values are stable and shared is important.
This distinction may also underlie the warning against messing with emotions… the idea seems to be that messing with emotions, unlike messing with everything else, risks affecting my terminal values. (I may be pounding that screw with my hammer, though; I’m still not confident I understand why EY thinks messing with everything else is so much safer than messing with emotions.)
(2) I feel I should clarify here that my husband and I are happily married; this is entirely a hypothetical example. Also, my officemate recently brought me chocolate without my even having to leave my cube, let alone drive anywhere. Truly, I live a blessed life.
(3) Mind you, I don’t have one handy. But the longest journey begins, not with a single step, but with the formation of the desire to get somewhere.