Do you mean, do you place a value on your future values? I don’t think you can do anything but place negative value on a change in your values. What’s an example of a rationality model that does what you’re asking?
This is true in theory, but in practice, what we think are our terminal values can later turn out to be instrumental values, which we abandon once we discover they don't serve our even-more-terminal values. Thus lots of people who used to think that homosexuality was inherently wrong feel differently when they discover that their stereotypes about gay people were mistaken.
I don’t think you can do anything but place negative value on a change in your values.
At the very least, this would seem to hold only in the extreme case that you were absolutely certain that your current values are both exhaustive and correct. I, for one, am not; and I'm not sure it's reasonable for anyone to be so certain.*
I would generally like to value more things than I currently do. Provided they aren't harming anybody, having more things I can find value, meaning, and fulfillment in seems like a good thing.
One of the things I want from my values is internal consistency. I’m pretty sure my current values are not internally consistent in ways I haven’t yet realized. I place positive value on changing to more consistent values.
* Unless values are supposed to be exhaustive and correct merely because you hold them—in which case, why should you care if they change? They’ll still be exhaustive and correct.
I don’t think you can do anything but place negative value on a change in your values.
My assumption is that I would not choose to change my values unless I saw the change as an improvement. If my change in values is both voluntary and intentional, I'm certain my current self would approve, given the relevant new information.
Jim Moor makes a similar case in "Should We Let Computers Get Under Our Skin"; I dispute it in a paper (abstract here).
The gist is that if we have self-improvement as a value, then yes, changing our values can be a positive thing, even considered ahead of time.