I think it’s not binary. The ‘humane’ approach focuses on the endgame, the ‘moralist’ on a tactic he thinks will get there. Strategy not married to tactics is futile, so I think the Taoist in this example could be faulted for being naive: can’t we all just get along? Clearly, some ethics and habits are better suited to humaneness than others. But the problem with the tacticians, the moralists, is that they are often wrong: their practices won’t reach the objective very well (think of the poor Raelians). Indeed, any sufficiently comprehensive set of tactics will be wrong, and any right set of tactics will be incomplete.
Thus, I think it’s wise to think about a good endgame (what gives your life meaning, satisfaction, and pleasure), but it is just as important to think about the specific rules that maximize those objectives. Out of ignorance and the sheer difficulty of the problem, you will certainly not pick the optimum, so you will always be ‘wrong’, especially in hindsight, about both target and tactics; but that should not lead to nihilism. Rather, apply your intelligence: learn throughout your life. By the time we die, we still won’t have it exactly right, but good enough for this self-aware subsystem.
It seems to me that the humane approach endorses a tactic of avoiding assessing everything in terms of the endgame; think of Tit-For-Tat, whose tactic completely ignores the opponent’s predicted response. Of course, you need to think about the endgame in order to choose a tactic in the first place, but it could be counterproductive to keep taking it into account once the tactic is chosen, perhaps due to limited time, limited computational resources, or something more arcane like Omega reading your mind.
Or, of course, because you run on hostile hardware.
(I do not necessarily endorse the “humane” position in that fictional debate.)
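Tit-For-Tat’s rule fits in a couple of lines, which is rather the point. A minimal sketch (the function and variable names are my own illustration, not from any particular library): the tactic consults only the opponent’s last observed move, with no model of the opponent’s predicted response and no reference to the endgame once the tactic is adopted.

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter mirror the
    opponent's previous move. No lookahead, no opponent modeling."""
    if not opponent_history:
        return "C"  # cooperate first
    return opponent_history[-1]

# Illustration: five rounds against an unconditional defector.
my_moves, their_moves = [], []
for _ in range(5):
    my_moves.append(tit_for_tat(their_moves))
    their_moves.append("D")  # the other player always defects

print(my_moves)  # ['C', 'D', 'D', 'D', 'D']
```

The whole computation is a single lookup into history, which is why the strategy stays cheap even when time or computational resources are limited.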
I see what you mean, but there is still something to the observation that people who obsess a lot about morality generally aren’t very pleasant.