Similarly, suppose you have two deontological values which trade off against each other. Before systematization, the question of “what’s the right way to handle cases where they conflict” is not really well-defined; you have no procedure for doing so. After systematization, you do. (And you also have answers to questions like “what counts as lying?” or “is X racist?”, which without systematization are often underdefined.) [...]
You can conserve your values (i.e. continue to care terminally about lower-level representations) but the price you pay is that they make less sense, and they’re underdefined in a lot of cases. [...] And that’s why the “mind itself wants to do this” framing does make sense, because it’s reasonable to assume that highly capable cognitive architectures will have ways of identifying aspects of their thinking that “don’t make sense” and correcting them.
I think we should be careful to distinguish explicit and implicit systematization. Some of what you are saying (e.g. getting answers to questions like “what counts as lying”) sounds like you are talking about explicit, consciously done systematization; but some of what you are saying (e.g. minds identifying aspects of their thinking that “don’t make sense” and correcting them) also sounds like it’d apply more generally to developing implicit decision-making procedures.
I could see the deontologist solving their problem either way: by developing some explicit procedure and reasoning for resolving the conflict between their values, or by just going with a gut feeling for which value seems to make more sense to apply in that situation, with the mind then incorporating this decision into its underlying definitions of the two values.
I don’t know exactly how deontological rules work, but I’m guessing that you could resolve a conflict between them by basically just putting in a special case for “in this situation, rule X wins over rule Y”. If you view the rules as regions in state space, where the region for rule X corresponds to the situations in which rule X is applied, then adding data points about which rule is meant to cover which situation ends up modifying the rule itself (there’s a rough code sketch of this picture further below). This would also be similar to the way rules work in skill learning in general, in that experts find the rules getting increasingly fine-grained, implicit, and full of exceptions. Here’s how Josh Waitzkin describes the development of chess expertise:
Let’s say that I spend fifteen years studying chess. [...] We will start with day one. The first thing I have to do is to internalize how the pieces move. I have to learn their values. I have to learn how to coordinate them with one another. [...]
Soon enough, the movements and values of the chess pieces are natural to me. I don’t have to think about them consciously, but see their potential simultaneously with the figurine itself. Chess pieces stop being hunks of wood or plastic, and begin to take on an energetic dimension. Where the piece currently sits on a chessboard pales in comparison to the countless vectors of potential flying off in the mind. I see how each piece affects those around it. Because the basic movements are natural to me, I can take in more information and have a broader perspective of the board. Now when I look at a chess position, I can see all the pieces at once. The network is coming together.
Next I have to learn the principles of coordinating the pieces. I learn how to place my arsenal most efficiently on the chessboard and I learn to read the road signs that determine how to maximize a given soldier’s effectiveness in a particular setting. These road signs are principles. Just as I initially had to think about each chess piece individually, now I have to plod through the principles in my brain to figure out which apply to the current position and how. Over time, that process becomes increasingly natural to me, until I eventually see the pieces and the appropriate principles in a blink. While an intermediate player will learn how a bishop’s strength in the middlegame depends on the central pawn structure, a slightly more advanced player will just flash his or her mind across the board and take in the bishop and the critical structural components. The structure and the bishop are one. Neither has any intrinsic value outside of its relation to the other, and they are chunked together in the mind.
This new integration of knowledge has a peculiar effect, because I begin to realize that the initial maxims of piece value are far from ironclad. The pieces gradually lose absolute identity. I learn that rooks and bishops work more efficiently together than rooks and knights, but queens and knights tend to have an edge over queens and bishops. Each piece’s power is purely relational, depending upon such variables as pawn structure and surrounding forces. So now when you look at a knight, you see its potential in the context of the bishop a few squares away. Over time each chess principle loses rigidity, and you get better and better at reading the subtle signs of qualitative relativity. Soon enough, learning becomes unlearning. The stronger chess player is often the one who is less attached to a dogmatic interpretation of the principles. This leads to a whole new layer of principles—those that consist of the exceptions to the initial principles. Of course the next step is for those counterintuitive signs to become internalized just as the initial movements of the pieces were. The network of my chess knowledge now involves principles, patterns, and chunks of information, accessed through a whole new set of navigational principles, patterns, and chunks of information, which are soon followed by another set of principles and chunks designed to assist in the interpretation of the last. Learning chess at this level becomes sitting with paradox, being at peace with and navigating the tension of competing truths, letting go of any notion of solidity.
“Sitting with paradox, being at peace with and navigating the tension of competing truths, letting go of any notion of solidity” also sounds to me like some of the models for higher stages of moral development, where one moves past the stage of trying to explicitly systematize morality and can treat entire systems of morality as things that all co-exist in one’s mind and are applicable in different situations. Which would make sense, if moral reasoning is a skill in the same sense that playing chess is a skill, and moral preferences are analogous to a chess expert’s preferences for which piece to play where.
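To make the rules-as-regions picture concrete, here’s a minimal sketch using 1-nearest-neighbour lookup over situation features. Everything specific in it (the features, the rules, the numbers) is an illustrative assumption rather than anything from the discussion above:

```python
# A minimal sketch of the "rules as regions in state space" picture,
# using 1-nearest-neighbour lookup. All concrete details are made up
# for illustration.

from math import dist

# Each precedent is (situation_features, winning_rule). A rule's "region"
# is implicitly the set of situations closer to its precedents than to any
# other rule's precedents.
precedents: list[tuple[tuple[float, float], str]] = [
    ((0.1, 0.9), "don't lie"),   # prototypical honesty case
    ((0.9, 0.1), "don't harm"),  # prototypical harm-avoidance case
]

def winning_rule(situation: tuple[float, float]) -> str:
    """Return the rule whose nearest precedent is closest to the situation."""
    return min(precedents, key=lambda p: dist(p[0], situation))[1]

# A borderline conflict, roughly equidistant from both prototypes:
conflict = (0.55, 0.5)
print(winning_rule(conflict))  # -> "don't harm" (barely)

# Resolving the conflict ("in this situation, rule X wins over rule Y")
# is just adding a data point -- which reshapes the implicit boundary
# between the two rules' regions:
precedents.append((conflict, "don't lie"))
print(winning_rule((0.5, 0.55)))  # nearby cases now fall to "don't lie"
```

The point being that under this view, “adding a special case” and “modifying the rule itself” are literally the same operation.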
Except that chess really does have an objectively correct value systemization, which is “win the game.” “Sitting with paradox” just means, don’t get too attached to partial systemizations. It reminds me of Max Stirner’s egoist philosophy, which emphasized that individuals should not get hung up on partial abstractions or “idées fixes” (honesty, pleasure, success, money, truth, etc.) except perhaps as cheap, heuristic proxies for one’s uber-systematized value of self-interest. Instead, one should always keep in mind the overriding abstraction of self-interest and check in periodically as to whether one’s commitment to honesty, pleasure, success, money, truth, or any of these other “spooks” really is promoting one’s self-interest (perhaps yes, perhaps no).
Except that chess really does have an objectively correct value systemization, which is “win the game.”
Your phrasing sounds like you might be saying this as an objection to what I wrote, but I’m not sure how it would contradict my comment.
The same mechanisms can still apply even if the correct systematization is subjective in one case and objective in the other. Ultimately what matters is that the cognitive system feels that one alternative is better than the other, and takes that feeling as feedback for shaping future behavior. And I think the mechanism that updates on feedback doesn’t really see whether the source of the feedback is something we’d call objective (a win or loss at chess) or subjective (whether the resulting outcome was good in terms of the person’s pre-existing values).
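As a toy illustration of that indifference: a bandit-style learner whose update rule only ever sees a scalar “this went well / badly” signal, and so has no way of distinguishing an objective source from a subjective one. The learner, the options, and the feedback values are all assumptions made up for the example:

```python
# A toy learner whose update rule only sees a scalar feedback signal,
# and so cannot tell an objective source (a chess win) apart from a
# subjective one (the outcome felt right). All specifics are assumed.

import random

value_estimate = {"apply rule X": 0.0, "apply rule Y": 0.0}
LEARNING_RATE = 0.1

def choose() -> str:
    """Pick the currently highest-valued option, breaking ties randomly."""
    best = max(value_estimate.values())
    return random.choice([o for o, v in value_estimate.items() if v == best])

def update(option: str, feedback: float) -> None:
    """Shift the option's value estimate toward the feedback signal.
    Nothing here depends on where the feedback came from."""
    value_estimate[option] += LEARNING_RATE * (feedback - value_estimate[option])

# Objective and subjective feedback enter through exactly the same channel:
update("apply rule X", feedback=1.0)   # e.g. won the game
update("apply rule Y", feedback=-0.5)  # e.g. the outcome felt wrong
print(choose())  # -> "apply rule X"
```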
“Sitting with paradox” just means, don’t get too attached to partial systemizations.
Yeah, I think that’s a reasonable description of what it means in the context of morality too.