I don’t think you really run the risk of becoming less human through rationality at all. You use the example of a paperclip maximizer, but that arises from a fundamentally different set of core values. A rational human, on the other hand, retains human values; that’s a big emphasis, in fact—being rational doesn’t mean being a Vulcan. I suppose one could stray from common human values, but I don’t see how an increase in rationality alone could cause this; rationality is just a tool that serves our desires and motivations, whatever they might be.
I think the only danger would be coming to a mistaken conclusion (like “exterminate all humans”) and then, out of a desire to be rational, sticking rigidly to it and thus inadvertently causing damage as efficiently as possible. But one would hope aspiring rationalists also learn to be flexible and cautious enough that this would not happen.
A conflict between what rationality tells you is right and what you feel is right seems like a somewhat more common situation. (I would always take note of this, keeping in mind paragraph number two, because the feeling is there for a reason. That doesn’t mean it’s right, though.) This conflict arises, I think, when we take principles based on these core values—like “pleasure good”—and extrapolate them further than human intuition was ever required to go; dealing with very large numbers, for instance, or abstract situations that would normally be covered by a snap judgment.
Thus we reach conclusions that may seem odd at first, but we’re not just creatures of intuition and emotion—we also have the ability to think logically, and eventually to change our minds and accept new views. So if you can explain your morality in a way that is acceptable to a rational human, then it isn’t really becoming less human at all.
Our “human intuition” is not always correct, anyway. (In fact, I personally would go so far as to say that any rational being with human experiences should arrive at a morality similar to utilitarianism, and thus becoming more rational just means one arrives at this conclusion more quickly, but that’s another debate.) You bring up a very interesting and relevant topic, though.
As for empathy—I don’t think becoming more rational means having less empathy for “irrational human experiences”! For one, what makes them irrational? There’s nothing inherently rational or irrational about tasting a delicious pastry!