Another concern could be that “there is almost never a stable core of an individual human’s values”, i.e., that “even going forward from today, the values of Lukas or Rohin or Wei are going to be heavily underdetermined”. Is that the concern?
Yeah. Also I suspect some people are worried about taking current-you as a starting point—that seems somewhat arbitrary. But if you’re fine with that, then the major concern is that values are still underdetermined going forward.
I probably have a hard time understanding the framework behind your statement because I’m thinking of a different part of my brain when I talk about “my values”. I identify very strongly with my reflective life goals, to a degree that seems unusual.
I interpreted Wei’s comment as saying that even your reflective life goals would be underdetermined—presumably even now if you hear convincing moral argument A but not B, then you’d have different reflective life goals than if you hear B but not A. This seems broadly correct to me.
Okay yeah, that also seems broadly correct to me.
I am hoping, though, that as long as I’m not subjected to outside optimization pressures that weren’t crafted to be helpful, it’s very rare that whether something I currently consider very important stays important or becomes completely unimportant hinges merely on the order in which I encounter new arguments. Similarly, I’m hoping that my value endpoints would still cluster decisively around the things I currently consider most important, though that’s where it becomes tricky to trade off goal preservation against openness to philosophical progress.