If there is something that you care about more than your values, they are not really your values.
You seem to rely on a hidden assumption here: that I am equally confident in all my values.
I don’t think my values are consistent. Having more powerful deductive reasoning and constant access to extreme corner cases would obviously change my value system. I also anticipate that my values would not be changed equally: some of them would survive the encounter with extreme corner cases, and some would not. Right now I don’t have to constantly deal with perfect clones and merging minds, so I am fine with my values as they are. But even now, I have quite a good intuition about which of them would not survive the future shock. That’s why I can talk without contradiction about accepting the loss of those.
In CEV jargon: my expectation is that the extrapolation of my value system might not be recognizable to me as my value system. Wei_Dai voiced some related concerns with CEV here. It is worth looking at the first link in his comment.
Oh, I see. I appear to have initially missed the phrase ‘much of my values’.
I am wary of referring to my current inconsistent values, rather than their reflective equilibrium, as ‘my values’ because of the principle of explosion, but then I am unsure how to reconcile this with my current self having values at all.
It seems our positions can be summed up like this: You are wary of referring to your current values rather than their reflective equilibrium as ‘your values’, because your current values are inconsistent. I am wary of referring to the reflective equilibrium rather than my current values as ‘my values’, because I expect the transition to reflective equilibrium to be a very aggressive operation. (One could say that I embrace my ignorance.)
My concern is that the reflective equilibrium is far from my current position in the dynamical system of values. Meanwhile, Marcello and Wei Dai are concerned that the dynamical system is chaotic and has multiple reflective equilibria.
I don’t worry about the aggressiveness of the transition because, if my current values are inconsistent, they can be made to say that this transition is both good and bad. I share the concern about multiple reflective equilibria. What does it mean to judge something, as an irrational cishuman, if two reflective equilibria would disagree on what is desirable?
Upvoted purely for the tasty, tasty understatement here.
I should get that put on a button.
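(A toy sketch of the multiple-equilibria worry, purely illustrative and not drawn from CEV itself: assume a single one-dimensional value and a made-up update rule with two fixed points, so that starting points on either side of a threshold get pulled to different equilibria.)

```python
def reflect(v, step=0.1):
    # One step of a made-up "reflection" dynamic on a single value v in [-1, 1]:
    # the value drifts toward whichever extreme it already leans toward,
    # so both -1 and +1 are equilibria of the process.
    return max(-1.0, min(1.0, v + step * (1 if v >= 0 else -1)))

def equilibrium(v, iters=100):
    for _ in range(iters):
        v = reflect(v)
    return v

print(equilibrium(0.05))   # ->  1.0
print(equilibrium(-0.05))  # -> -1.0: a tiny difference in the starting point
                           #    selects a different reflective equilibrium
```

Nothing hinges on this particular rule; the point is only that an extrapolation procedure with more than one fixed point can make the outcome depend on fine details of the starting state.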
I like to think of my “true values” as (initially) unknown, and my moral intuitions as evidence of, and approximations to, those true values. I can then work on improving the error margins, confidence intervals, and so forth.
So do I, but I worry that they are not uniquely defined by the evidence. I may eventually be moved to unique values by irrational arguments, but if those values are different from my current true values, then I will have lost something; and if I don’t have any true values, then my search for values will have been pointless, though my future self will be okay with that.
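(A toy model of the ‘intuitions as evidence of true values’ framing, under the obviously unrealistic assumption that intuitions can be scored as independent noisy samples of a single underlying number; the names and numbers below are hypothetical.)

```python
import statistics as stats

def estimate_true_value(intuitions, z=1.96):
    # Treat each scored intuition as a noisy measurement of one underlying
    # "true value"; return the mean and a rough 95% confidence interval.
    n = len(intuitions)
    mean = stats.fmean(intuitions)
    if n < 2:
        return mean, (float("-inf"), float("inf"))  # no estimate of spread yet
    sem = stats.stdev(intuitions) / n ** 0.5        # standard error of the mean
    return mean, (mean - z * sem, mean + z * sem)

# Hypothetical scores: how strongly different cases seem to endorse some principle.
estimate, interval = estimate_true_value([0.9, 0.7, 0.85, 0.4, 0.8])
print(estimate, interval)
```

The worry above maps onto this toy model directly: if the intuitions are really drawn from a mixture of several distinct ‘true values’ (several equilibria), then narrowing the interval never picks out a unique one.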