If you regard those who are not rational as ‘our enemies’, then I suppose that reasoning holds.
A Utilitarian, considering what’s best in the long term, would certainly prefer people who’ve managed to be made more fit by the truth; delusion is clearly more costly, ceteris paribus.
Anyone who accepts an egoistic ethics should accept that the mere fact that they are ‘enemies’ is reason enough to want them less fit.
Kantians value truth for obvious reasons. Lying is probably the only act to which Kant successfully applied the categorical imperative.
Of course, a certain sort of Altruist might think that making people feel nice now is worth… well, they’d probably stop thinking at that point.
But even given all this, it turns out I’m one of the ordinary humans who’s aided by placebos, and I don’t regard humans as the ‘enemy’. So I’m in favor of placebos, for now. Though I’m doubly in favor of altering human cognitive architecture so that the truth works even better.