This reminds me that I have an old post asking “Why Do We Engage in Moral Simplification?” (What I called “moral simplification” seems very similar to what you call “value systematization”.) I guess my post didn’t fully answer this question, and you don’t seem to talk much about the “why” either.
Here are some ideas after thinking about it for a while. (“Morality is Scary” is useful background here, if you haven’t read it already.)
1. Wanting to use explicit reasoning with our values (e.g., to make decisions), which requires making our values explicit, i.e., defining them symbolically, which in turn necessitates simplification given the limitations of human symbolic reasoning.
2. Moral philosophy as a status game, in which moral philosophers are implicitly scored on how simple their theories are and how many human moral intuitions those theories are consistent with.
3. Everyday signaling games, in which people compete (in part) to show that they have community-approved or locally popular values. Making values legible and not too complex facilitates playing these games.
4. Instinctively transferring our intuitions/preferences for simplicity from “belief systematization”, where they work really well, into a different domain (values), where they may or may not still make sense.
(Not sure how any of this applies to AI. Will have to think more about that.)