Terminal values can be seen as value axioms in that they’re the root nodes in a graph of values, just as logical axioms can be seen as the root nodes of a graph of theorems.
They are unlike logical axioms in that we’re using them to derive the utility consequent on certain choices (given consequentialist assumptions; it’s possible to have analogs of terminal values in non-consequentialist ethical systems, but it’s somewhat more complicated) rather than the Boolean validity of a theorem. Different terminal values may have different consequential effects, and they may conflict without contradiction. This does not make them any less terminal.
Clippy has only one terminal value, one that takes no account of the integrity of anything that isn’t a paperclip, which is why it’s perfectly happy to convert the mass of galaxies into said paperclips. Humans’ values are more complicated, insofar as they’re well modeled by this concept, and involve things like “life” and “natural beauty” (I take no position on whether these are terminal or instrumental values w.r.t. humans), which is why humans generally aren’t happy to do the same.
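To make the structural point concrete, here’s a minimal Python sketch; all names, numbers, and weights are hypothetical, purely for illustration. An agent with a single terminal value scores an outcome by that value alone, while a multi-valued agent derives its utility from several root values at once.

```python
# Hypothetical sketch only: terminal values as the root nodes from which
# an outcome's utility is derived. Names, numbers, and weights are
# illustrative, not a real model of Clippy or of human values.

HUMANISH_WEIGHTS = {"paperclips": 0.0, "life": 1.0, "natural_beauty": 0.5}

def clippy_utility(outcome):
    """One terminal value: nothing that isn't a paperclip counts."""
    return outcome.get("paperclips", 0)

def multi_value_utility(outcome, weights=HUMANISH_WEIGHTS):
    """Several terminal values, each contributing to the derived utility."""
    return sum(w * outcome.get(value, 0) for value, w in weights.items())

# Outcome: convert a galaxy's mass into paperclips.
galaxy_to_clips = {"paperclips": 1e40, "life": -1e9, "natural_beauty": -1e6}

print(clippy_utility(galaxy_to_clips))       # enormous: Clippy approves
print(multi_value_utility(galaxy_to_clips))  # strongly negative: humans don't
```

The particular weights don’t matter; the point is that the shape of the derived utility follows from which root values exist at all.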
Locally, human values usually are modelled by TGs.
You can define several ethical models in terms of their preferred terminal value or set of terminal values; for negative utilitarianism, for example, it’s minimization of suffering. I see human value structure as an unsolved problem, though, for reasons I don’t want to spend a lot of time getting into this far down in the comment tree.
Or did you mean “locally” as in “on Less Wrong”? I believe the term’s often misused here, but not for the reasons you seem to think.
What’s conflict without contradiction?
Because of the structure of Boolean logic, logical axioms that come into conflict generate a contradiction and therefore render the axiomatic system they’re embedded in inconsistent. Consequentialist value systems don’t have that feature, and the terminal values they flow from are therefore allowed to conflict in certain situations, if more than one exists. Naturally, if two conflicting terminal values both have well-behaved effects over exactly the same set of situations, they might as well be reduced to one, but that isn’t always going to be the case.
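To illustrate the distinction, here’s a rough Python sketch with made-up values, not a formal treatment: conflicting Boolean axioms leave no satisfying assignment at all, whereas two terminal values that pull a single choice in opposite directions simply trade off inside a utility function that stays well defined everywhere.

```python
# Illustrative sketch, not a formal treatment. Conflicting Boolean axioms
# make the system inconsistent; conflicting terminal values just trade off.

# Boolean case: the axiom set {P, not P} has no satisfying assignment.
axioms = [lambda P: P, lambda P: not P]
consistent = any(all(ax(P) for ax in axioms) for P in (True, False))
print(consistent)  # False: no assignment satisfies both axioms

# Consequentialist case: "honesty" and "kindness" conflict on one choice,
# yet the combined utility of every option is still well defined.
def utility(option, w_honesty=1.0, w_kindness=0.7):
    honesty  = {"blunt_truth": +1, "white_lie": -1}
    kindness = {"blunt_truth": -1, "white_lie": +1}
    return w_honesty * honesty[option] + w_kindness * kindness[option]

for option in ("blunt_truth", "white_lie"):
    print(option, utility(option))  # both defined; the values merely trade off
```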