When and why do people change their terminal values? Do the concepts of “moral error” and “moral progress” have referents? Why would anyone want to change what they want?
Suppose I currently want pie, but there is very little pie and lots of cake. Wouldn’t I have more of what-I-want if I could change what I want from pie to cake? Sure, that doesn’t get me more pie, but it does increase the valuation my utility function assigns to the world.
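To make that concrete, here is a minimal sketch (the pie and cake quantities are made-up numbers, not part of the scenario): judged by my current pie-valuing utility function, the switch gains me nothing; the gain only exists from the viewpoint of the function I would be switching to.

```python
# Minimal sketch of the pie/cake case. The quantities are illustrative
# assumptions (very little pie, lots of cake), not given in the scenario.

world = {"pie": 1, "cake": 10}

def u_pie(w):   # my current utility function: only pie counts
    return w["pie"]

def u_cake(w):  # the candidate replacement: only cake counts
    return w["cake"]

# Scored by my *current* function, self-modifying gains nothing:
# the world contains one pie whether or not I change what I want.
score_now_if_i_stay = u_pie(world)       # 1
score_now_if_i_switch = u_pie(world)     # 1 -- still one pie for me

# The apparent gain only shows up when the outcome is scored by the
# *new* function, i.e. by the post-change agent rather than by me-now.
score_later_if_i_switch = u_cake(world)  # 10
print(score_now_if_i_stay, score_now_if_i_switch, score_later_if_i_switch)
```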
Suppose I currently want pie for myself, but wanting pie for everyone would make Omega give me more pie without changing how much pie everyone else gets. Then I want to change what-I-want to wanting-pie-for-everyone, because the switch scores higher under both utility functions, old and new.
Suppose I currently want pie for myself, but wanting pie for everyone would make Omega give me pie at the expense of everyone else. Now I have no stable solution. I want to change what I want, but I don’t want to have changed what I want. My head is about to melt because I am very confused. “Rational agents should WIN” doesn’t seem to help when Omega’s behaviour depends on my definition of WIN.
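The same kind of sketch covers both Omega cases (again with made-up payoff numbers, since Omega’s deal is only described qualitatively). In the first case my current selfish function and the altruistic replacement both endorse the switch, so the self-modification is stable; in the second case the function I have now endorses it while the function I would end up with condemns it, which is exactly the want-to-change / don’t-want-to-have-changed loop.

```python
# Minimal sketch of the two Omega cases. All payoff numbers are
# illustrative assumptions; Omega's behaviour is only described
# qualitatively in the scenario.

def u_selfish(w):   # my current function: only my pie counts
    return w["my_pie"]

def u_everyone(w):  # the candidate function: everyone's pie counts
    return w["my_pie"] + w["others_pie"]

# Case 1: Omega gives me more pie, everyone else unaffected.
stay   = {"my_pie": 1, "others_pie": 5}
switch = {"my_pie": 3, "others_pie": 5}
assert u_selfish(switch) > u_selfish(stay)    # 3 > 1: me-now approves the change
assert u_everyone(switch) > u_everyone(stay)  # 8 > 6: me-after approves it too -> stable

# Case 2: Omega gives me pie at everyone else's expense.
stay   = {"my_pie": 1, "others_pie": 5}
switch = {"my_pie": 3, "others_pie": 1}
assert u_selfish(switch) > u_selfish(stay)    # 3 > 1: me-now wants to change
assert u_everyone(switch) < u_everyone(stay)  # 4 < 6: me-after wishes I hadn't -> no stable solution
```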