I want a similarly clear and well-understood generalization of the “reasoning vs. rationalizing” distinction, one that also applies to processes spread across multiple heads. I don’t have that yet, and I would much appreciate help toward it.
I’m not entirely happy with any of the terminology suggested in that post; something like “seeing your preferences realized” vs. “seeing the world clearly” would, to my mind, be better than either “self vs. no-self” or “design specifications vs. engineering constraints”.
In particular, Vaniver’s post makes the interesting contribution of pointing out that while “reasoning vs. rationalization” suggests that the two would be opposed, seeing the world clearly vs. seeing your preferences realized can be opposed, mutually supporting, or orthogonal. You can come to see your preferences more realized by deluding yourself, but you can also deepen both, seeing your preferences realized more because you are seeing the world more clearly.
In that ontology, instead of something being either reality-masking or reality-revealing, it can:
A. Cause you to see your preferences more realized and the world more clearly
B. Cause you to see your preferences more realized but the world less clearly
C. Cause you to see your preferences less realized but the world more clearly
D. Cause you to see your preferences less realized and the world less clearly
But the problem is that a system facing a choice between several options has no general way to tell whether a given option is actually an instance of A, B, C, or D, or whether it is stuck at a local maximum: one choice increases a variable a little, while another option would have increased it even more in the long term.
E.g. learning about the Singularity makes you see the world more clearly, but it also makes you see that fewer of your preferences might get realized than you had thought. But then the need to stay alive and navigate the Singularity successfully pushes you into D, where you are so focused on investing all your energy into that mission that you fail to see how this prevents you from actually realizing any of your preferences… and since you see yourself as being very focused on the task and ignoring “unimportant” things, you think that you are doing A while you are actually doing D.
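To make that last point concrete, here is a minimal toy sketch (my own framing, not anything from Vaniver’s post): an option is scored by its effect on the two variables, classified into A–D by sign, and the numbers for the “pour everything into the mission” option are purely hypothetical. The point it illustrates is just that a greedy read of immediate effects can label an option A while its long-run cumulative effect is D.

```python
def classify(d_preferences: float, d_clarity: float) -> str:
    """Map changes in (preferences realized, world seen clearly) to A-D."""
    if d_preferences >= 0 and d_clarity >= 0:
        return "A"  # preferences more realized, world seen more clearly
    if d_preferences >= 0 and d_clarity < 0:
        return "B"  # preferences more realized, world seen less clearly
    if d_preferences < 0 and d_clarity >= 0:
        return "C"  # preferences less realized, world seen more clearly
    return "D"      # preferences less realized, world seen less clearly

# Hypothetical option: "invest all your energy into the mission".
# The immediate, from-the-inside deltas look mildly positive on both axes,
# while the (assumed) cumulative effect over a long horizon is negative on both.
immediate = (+0.1, +0.1)   # how each step feels at the time
long_run  = (-0.5, -0.4)   # cumulative effect after many such steps

print(classify(*immediate))  # "A" -- what you think you are doing
print(classify(*long_run))   # "D" -- what is actually happening
```

The sketch is only a restatement of the problem, of course: nothing in it tells the system which of the two assessments is the right one, which is exactly the difficulty.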
I feel like Vaniver’s interpretation of self vs. no-self is pointing at a similar thing; would you agree?