The problem with this logic is that my values are better than those of my ancestors. Of course I would say that, but it’s not just a matter of subjective judgment; I have better information on which to base my values. For example, my ancestors disapproved of lending money at interest, but if they could see how well loans work in the modern economy, I believe they’d change their minds.
It’s easy to see how concepts like MWI or cognitive computationalism affect one’s values once accepted. It’s all but certain that transhumans will have further insights of similar significance, so I hope that human values continue to change.
I suspect that both quoted authors are closer to that position than to endorsing or accepting random value drift.
The problem with this logic is that my values are better than those of my ancestors.
Your values are what they are. They talk about how good certain possible future-configurations are, compared to other possible future-configurations. Other concepts that happen to also be termed “values”, such as your ancestors’ values, don’t say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values.
If you’d like for future people to be different in given respects from how people exist now, that is also a value judgment. For future people to feel different about their condition than you feel about their condition would make them disagree with your values (and dually).
Other concepts that happen to also be termed “values”, such as your ancestors’ values, don’t say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values.
I’m having difficulty understanding the relevance of this sentence. It sounds like you think I’m treating “my ancestors’ values” as a term in my own set of values, instead of a separate set of values that overlaps with mine in some respects.
My ancestors tried to steer their future away from economic systems that included money loaned at interest. They were unsuccessful, and that turned out to be fortunate; loaning money turned out to be economically valuable. If they had known in advance that loaning money would work out in everyone’s best interest, they would have updated their values (future-configuration preferences).
Of course, you could argue that neither of us really cared about loaning at interest; what we really cared about was a higher-level goal like a healthy economy. It would be convenient if we could restate our values as a well-organized hierarchy, with a node at the top that was invariant to available information. But even if that could be done, which I doubt, it would still leave a role for available information in deciding something as concrete as a preferred future-configuration.
Of course, you could argue that neither of us really cared about loaning at interest; what we really cared about was a higher-level goal like a healthy economy. It would be convenient if we could restate our values as a well-organized hierarchy, with a node at the top that was invariant to available information.
That’s closer to the sense I wanted to convey with this word.
But even if that could be done, which I doubt, it would still leave a role for available information in deciding something as concrete as a preferred future-configuration.
The distinction is between a formal criterion of preference and computationally feasible algorithms for estimating preference between specific plans. The concept relevant to this discussion is the former.
I haven’t yet been convinced that my values are any better than the values of my ancestors by this argument.
Yes, if I look at history, people generally tend to move toward my own current values (with detours along the way). But the same would be true if I looked back at my travelled path after doing a random walk.
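The random-walk point can be illustrated with a quick simulation. This is a minimal sketch, assuming a simple one-dimensional walk (the function name and parameters here are illustrative, not from any source): averaged over many walks, early positions lie farther from the endpoint than late positions, so viewed from the endpoint the path looks like steady "progress" toward where you happen to be now.

```python
import random

def mean_distance_to_endpoint(n_walks=2000, n_steps=100, seed=0):
    """For many 1-D random walks, compute the average distance
    between the position at each step and the walk's final position."""
    rng = random.Random(seed)
    totals = [0.0] * n_steps
    for _ in range(n_walks):
        pos = 0
        path = []
        for _ in range(n_steps):
            pos += rng.choice([-1, 1])
            path.append(pos)
        final = path[-1]
        for t, x in enumerate(path):
            totals[t] += abs(final - x)
    return [s / n_walks for s in totals]

d = mean_distance_to_endpoint()
# On average, the walk appears to "approach" its endpoint over time,
# even though each step was purely random.
print(d[0], d[len(d) // 2], d[-1])
```

Nothing in the walk is aimed at the endpoint, yet in hindsight the trajectory seems to converge on it, which is exactly why an apparent historical trend toward one's current values is weak evidence that those values are better.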
Sure, there are cases of knowledge changing proxy values (I would join my ancestors in favouring the punishment of witches if it turned out that they factually do use demonically gifted powers to hurt others), but there has also been plain old value drift. There are plenty of things our ancestors would never approve of even if they had all the knowledge we have.