Ahh, but am I? Or am I a hufflepuff who does not base his value system on self-deception?
The original intent of the egg laid by hens was something to do with the reproduction of chickens. Yet as far as I’m concerned, eggs are there to be separated white from yolk, whipped thoroughly and combined with the extract of artificially selected cane. Morals, ethics and values in general are similar: what matters to me is not the original intent or the causal factors but what my values happen to be right now. I get to choose which of my values I consider, well, part of ‘me’.
I note that the belief “the original intent of morality was to improve life”, or even “the intent of morality that can be inferred from human behavior is to improve life”, is not necessarily a stable belief to hold. That is, exposure to information from the world, through either social observation or theoretical study, will cause the belief to be discarded because it just isn’t, well, true. To refer to a well-known exhortation from a source held here in disrepute: don’t build your house on sand!
OK, why do you think Harry is concerned with ethical behavior toward all sentients?
It seems he took his intuitive value for ‘other things I can empathise with’ and applied it more generally than most people do. This is not a logical problem; there is a huge space of values that are internally coherent. Yet it has implications that lead me to consider Harry’s values only slightly preferable to Clippy’s. Optimizing the universe by those criteria would produce an outcome that I personally (and, I suggest, most people) would not like all that much.