Remaining human
If our morality is complex and directly tied to what's human (if we're seeking to avoid building paperclip maximizers), how do you judge and quantify the danger of training yourself to become more rational when that training might drift you away from being human?
My friend is a skeptical theist. She, for instance, scoffs mightily at Camping's little dilemma/psychosis, but then argues from a position of comfort that the Rapture is a silly thing to predict because it's clearly stated that no one will know the day. And then she gives me a confused look, because the psychological dissonance is clear.
On one hand, my friend is in a prime position to take steps toward self-examination and toward holding rational beliefs. On the other hand, she's an opera singer whose passion and profession require her to empathize with and explore highly irrational human experiences. Since rationality is the art of winning, nobody can deny that the option that lets you have your cake and eat it too is best, but how do you navigate such narrows?
In another example, a recent comment thread pointed out a danger of embracing human tendencies: catharsis might promote further emotional intensity. At the same time, catharsis is a well-appreciated human communication strategy with roots in the Greek stage. If rational action pulls you away from humanity, away from our complex morality, then how do we judge whether it's worth doing?
The most immediate resolution to this conundrum, it seems to me, is that human morality has no consistency constraint: we can want to be powerful and able to win while also wanting to retain the human tendencies that directly impinge on that goal. Is there a theory of metamorality that allows you to infer how such tradeoffs should be managed? Or is human morality, as a program, flawed with inconsistencies that lead to inescapable cognitive dissonance and dehumanization? If you interpret morality as a self-supporting strange loop, is it possible to have unresolvable, drifting interpretations depending on how you focus your attention?
Dual to the problem of resolving a way forward is the problem of the interpreter. If the goal is to at least marginally increase the rationality of humanity, but discovering the means to do so requires becoming less capable of empathizing and communicating with humanity, then who acts as an interpreter between the two divergent mindsets?