Endorsement on reflection is not straightforward: even states of knowledge, representations of values, or ways of interpreting them can fail to be endorsed. It’s not good from my perspective for someone else to lose themselves and start acting in my interests. But it is good for them to find themselves if they are confused about what they should endorse on reflection.
From 0-my perspective (my earlier self), it’s good for 1-me (my later self) to believe updatelessness is rational, even if from 1-my perspective it isn’t.
Values can say things about how agents think, about the reasons behind outcomes, not just about the outcomes themselves. An object-level moral point that gestures at the issue is that it’s not actually good when a person gets confused or manipulated and starts working towards an outcome that I prefer. That is, I don’t prefer an outcome when it comes bundled with a world that produced it in this way, even if I would prefer the outcome considered on its own. So I disagree with the claim that, assuming “from 1-my perspective it’s not good to do X”, it’s still “from 0-my perspective it’s good for 1-me to believe that they should do X”.
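A toy illustration (the utility numbers here are mine, purely for concreteness, not anything from the discussion): let a world be a pair (outcome, history), and let preferences rank worlds rather than bare outcomes. Say U(X, authentic) = 2, U(¬X, authentic) = 1, U(X, manipulated) = 0. Holding the history fixed, X beats ¬X (2 > 1), so I do prefer the outcome on its own; but the world where 1-me is manipulated into pursuing X ranks below the world where 1-me authentically pursues ¬X (0 < 1). Preferring an outcome therefore doesn’t commit 0-me to endorsing every way of getting 1-me to produce it.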
The metaethical point, that interpretation of states of knowledge or values is not straightforward, is about the nature of possible confusion about what an agent might value. There is a setting where decision theory is sorted out and values are specified explicitly, so that the notion of the agent being confused about them is not under consideration. But if we do entertain the possibility of confusion, that the design isn’t yet settled, or that there is no reflective stability, then the thing that’s currently written down as “values” and that determines immediate actions has little claim to being the actual values.
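A rough way to put it (my notation, not anything standard): write V0 for the explicitly represented “values” that currently drive action, and V* for whatever reflection would eventually settle on. Under reflective stability, V* = V0, and there is nothing to be confused about. Without it, V0 is just the initial condition of a reflective process, and identifying it with the agent’s actual values assumes exactly what is in question.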