Terminal values can be seen as value axioms in that they’re the root nodes in a graph of values, just as logical axioms can be seen as the root nodes of a graph of theorems.
They are unlike logical axioms in that we’re using them to derive the utility consequent on certain choices (given consequentialist assumptions; it’s possible to have analogs of terminal values in non-consequentialist ethical systems, but it’s somewhat more complicated) rather than the Boolean validity of a theorem. Different terminal values may have different consequential effects, and they may conflict without contradiction. This does not make them any less terminal.
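To make the root-node picture concrete, here’s a minimal sketch (Python, with names and weights invented purely for illustration — this isn’t anyone’s actual value graph): instrumental values derive their worth from the terminal roots they serve, while a terminal value’s worth isn’t derived from anything further.

```python
# Toy value graph; every name and number here is made up for illustration.
# Terminal values are the roots: valued in themselves, nothing further to derive from.
TERMINAL = {"comfort": 1.0, "natural_beauty": 0.8}

# Instrumental values point back at the terminal values they serve,
# with a weight for how strongly they serve them.
INSTRUMENTAL = {
    "money": [("comfort", 0.5)],
    "hiking_trip": [("natural_beauty", 0.9), ("comfort", 0.1)],
}

def worth(node):
    """Worth of a node, derived by walking back to the terminal roots."""
    if node in TERMINAL:
        return TERMINAL[node]  # root: its worth is taken as given, like an axiom
    return sum(w * worth(parent) for parent, w in INSTRUMENTAL[node])

print(worth("money"))        # 0.5  -- only worth anything via the comfort it buys
print(worth("comfort"))      # 1.0  -- worth something in itself
print(worth("hiking_trip"))  # 0.82 -- mostly via natural beauty
```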
Clippy has only one terminal value which doesn’t take into account the integrity of anything that isn’t a paperclip, which is why it’s perfectly happy to convert the mass of galaxies into said paperclips. Humans’ values are more complicated, insofar as they’re well modeled by this concept, and involve things like “life” and “natural beauty” (I take no position on whether these are terminal or instrumental values w.r.t. humans), which is why they generally aren’t.
Locally, human values are usually modelled by TGs (terminal goals).
You can define several ethical models in terms of their preferred terminal value or set of terminal values; for negative utilitarianism, for example, it’s minimization of suffering. I see human value structure as an unsolved problem, though, for reasons I don’t want to spend a lot of time getting into this far down in the comment tree.
Or did you mean “locally” as in “on Less Wrong”? I believe the term’s often misused here, but not for the reasons you seem to.
What’s conflict without contradiction?
Because of the structure of Boolean logic, logical axioms that come into conflict generate a contradiction and therefore imply that the axiomatic system they’re embedded in is invalid. Consequentialist value systems don’t have that feature, and the terminal values they flow from are therefore allowed to conflict in certain situations, if more than one exists. Naturally, if two conflicting terminal values both have well-behaved effects over exactly the same set of situations, they might as well be reduced to one, but that isn’t always going to be the case.
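A minimal sketch of the difference, with invented outcomes and scores (the sum is just the simplest possible aggregation rule, not a claim about how real agents combine their values): two terminal values disagree about which outcome is best, but the overall system stays perfectly well-defined — you get a trade-off, not an exploded axiom set.

```python
# Two terminal values score the same outcomes differently. They conflict over
# which outcome is best, yet the system remains coherent: a trade-off results,
# not a contradiction. All names and numbers are illustrative.

def comfort(outcome):          # terminal value 1 (illustrative)
    return {"stay_home": 1.0, "go_hiking": 0.3}[outcome]

def natural_beauty(outcome):   # terminal value 2 (illustrative)
    return {"stay_home": 0.1, "go_hiking": 0.9}[outcome]

def utility(outcome):
    # Simplest possible aggregation; the only point is that conflicting
    # terminal values yield a trade-off rather than invalidating the system.
    return comfort(outcome) + natural_beauty(outcome)

outcomes = ["stay_home", "go_hiking"]
print(max(outcomes, key=utility))  # "go_hiking" (1.2 vs 1.1)
```

Contrast Boolean logic, where asserting both P and not-P doesn’t give you a trade-off; it lets you derive anything, which is why conflicting axioms invalidate the whole system.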
If acquiring bacon were your ONLY terminal goal, then yes, it would be irrational not to do absolutely everything you could to maximize your expected bacon. However, most people have more than just one terminal goal. You seem to be using ‘terminal goal’ to mean ‘a goal more important than any other’. Trouble is, no one else is using it this way.
EDIT: Actually, it seems to me that you’re using ‘terminal goal’ to mean something analogous to a terminal node in a tree search (if you can reach that node, you’re done). No one else is using it that way either.
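For concreteness, here’s the tree-search sense of ‘terminal’ as I read it (toy tree, invented names): a terminal node is just a node with no successors — a place where the search is done — which is a stopping condition, not something valued for its own sake.

```python
# Toy tree (invented): a "terminal" node here is simply one with no children,
# i.e. a point where the search is finished.
tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "a1": [],
    "a2": [],
    "b": [],
}

def terminal_nodes(node):
    """Collect nodes where the search simply stops."""
    children = tree[node]
    if not children:
        return [node]          # nothing left to expand: the search is "done"
    found = []
    for child in children:
        found.extend(terminal_nodes(child))
    return found

print(terminal_nodes("start"))  # ['a1', 'a2', 'b']
```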
Feel free to offer the correct definition. But note that you can’t define it as overridable, since non-terminal goals are already defined that way.
There is no evidence that people have one or more terminal goals. At the least, you need to offer a definition such that multiple TGs don’t collide and are distinguishable from non-TGs.
So you have a thing which is like an axiom in that it can’t be explained in more basic terms...
...but is unlike an axiom in that you can ignore its implications where they don’t suit... you don’t have to savage galaxies to obtain bacon...
...unless you’re an AI and it’s paperclips instead of bacon, because in that case these axiom-like things actually are axiom-like.
Where are you getting these requirements from?