(I don’t even know how to express coherently the idea that “values are getting better”.)
Do you grant that I can have reflective preferences about the way my values should change in the future? That is, that I would not want my values to change in certain ways (e.g. by the intervention of an antagonist) but would want my values to change in other ways (e.g. if I think for a long time and decide that I value something different).
If so, it seems clear that I can have preferences over ways my values could have changed in the past, and can therefore say that some processes of change are good and some are bad. (To get the actual statement you made you would need something like CEV, but you don’t need the statement you made to define or justify CEV).
Do you grant that I can have reflective preferences about the way my values should change in the future?
No. If you have a preference about how your values should change, it means you have conflicting values. If you think that you want your values to change, this probably means that the conscious you weights one value more heavily, and the unconscious you weights another. This is what is happening when people say they wish they could eat less. Their minds want to eat less, and their bodies want to eat more.
You and paulfchristiano seem to be using the word “way” in two different ways.
Your post makes sense if I replace “the way” and “how” with “the direction in which”. His makes sense if I replace “the way” with “the means by which”.
To apply your example: I can’t consistently prefer a value change like “I should eat more fish”, because if I wholeheartedly preferred that then I’d already be eating more fish. I can prefer a value change like “I should eat more of whatever foods are recommended by good nutritional studies that I haven’t seen yet”, because although I cannot identify any specific failing of my current values, I can identify specific ways in which they might be improved in the future by new information.
This possibility of improvement applies only to instrumental values and self-inconsistent terminal values, but that’s still pretty useful. How many people currently have and can unambiguously define self-consistent terminal values?
Maybe you can have preferences about your future values, but most moral change is very slow. Do societies have coherent preferences about their future values? Before you say yes, consider the massive moral differences between us and some ancient ancestor society. Would Socrates really have predicted universal suffrage?
Plato imagined women voting.
Francis Godwin in the 1620s imagined traveling to the moon. Imagining progress is not the same as implementing it.