Similarly, suppose you have two deontological values which trade off against each other. Before systematization, the question of “what’s the right way to handle cases where they conflict” is not really well-defined; you have no procedure for doing so.
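To make that concrete, here’s a minimal toy sketch (my own illustration, with invented rules and fields, not anything from the post): two rule-like values where every available option violates exactly one of them, and no combining procedure is defined.

```python
# Toy illustration (invented for this example): two deontological rules, each
# returning a verdict on an option. Every option violates exactly one rule, and
# nothing in the pre-systematized "value system" says which rule wins.

def permitted_by_no_lying(option):
    return not option["involves_lying"]

def permitted_by_no_harm(option):
    return not option["causes_harm"]

options = {
    "tell_truth": {"involves_lying": False, "causes_harm": True},
    "lie":        {"involves_lying": True,  "causes_harm": False},
}

for name, option in options.items():
    print(name, permitted_by_no_lying(option), permitted_by_no_harm(option))
# tell_truth True False
# lie False True

# There is no combine() step here at all; on this reading, systematization is
# precisely the move of introducing one (a lexical ordering, a weighted
# trade-off, etc.) so that conflicts like this have a defined answer.
```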
Why is this a problem that calls out to be fixed (and hence leads to systematization)? Why not just stick with the default of “go with whichever value/preference/intuition feels stronger in the moment”? People do that unthinkingly all the time, right? (I have my own thoughts on this, but I’m curious whether you agree with me, or what your own thinking is.)
And that’s why the “mind itself wants to do this” framing does make sense: it’s reasonable to assume that highly capable cognitive architectures will have ways of identifying aspects of their thinking that “don’t make sense” and correcting them.
How would you cash out “don’t make sense” here?
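To give you something concrete to react to, here’s one toy cash-out I could imagine (purely my own illustrative assumption, not something you’ve committed to): treating an intransitive preference cycle as a paradigm case of thinking that “doesn’t make sense”, which a self-monitoring architecture could detect and then repair.

```python
# Purely illustrative assumption (mine, not the author's): one concrete kind of
# "doesn't make sense" is an intransitive preference cycle (A > B > C > A),
# which a self-monitoring architecture could detect and then repair.

def find_cycle(prefs):
    """Return a strict-preference cycle if one exists, else None.
    prefs maps each item to the set of items it is strictly preferred to."""
    def dfs(node, path, seen):
        for worse in prefs.get(node, ()):
            if worse in path:
                return path[path.index(worse):] + [worse]
            if worse not in seen:
                seen.add(worse)
                cycle = dfs(worse, path + [worse], seen)
                if cycle:
                    return cycle
        return None

    for start in prefs:
        cycle = dfs(start, [start], {start})
        if cycle:
            return cycle
    return None

prefs = {"A": {"B"}, "B": {"C"}, "C": {"A"}}  # A > B > C > A: incoherent
print(find_cycle(prefs))                      # ['A', 'B', 'C', 'A']
```

Only the detection half is sketched here; the “correcting” step could be anything from dropping one pairwise judgement to reweighting them until what remains is transitive.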