> Furthermore, human values are over the “true” values of the latents, not our estimates—e.g. I want other people to actually be happy, not just to look-to-me like they’re happy.
I’m not sure that I’m convinced of this. I think when we say we value reality over our perception, it’s because we have no faith in our perception to remain optimistically detached from reality. If I think about how I want my friends to be happy, not just appear happy to me, it’s because of a built-in assumption that if they appear happy to me but are actually depressed, the illusion will inevitably break. So in this sense I care not just about my current estimate of a latent variable, but about what my future retroactive estimates will be. I’d rather my friend actually be happy than be perfectly faking it for the same reason I save money and eat healthy—I care about future me.
What about this scenario: my friend is unhappy for a year while I think they’re perfectly happy, then at the end of the year they are actually happy, but they reveal to me that they’ve been depressed for the last year. Why is future me upset in this scenario, and why does current me want to avoid it? Because latent variables aren’t time-specific: I care about the values of latent variables in the future and the past, albeit less so. To summarize: I care about my own happiness across time, and future me cares about my friend’s happiness across time, so I end up caring about the true value of the latent variable (my friend’s happiness). But this is an instrumental value: I care about the true value because it affects my estimates, which are what I care about intrinsically.
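The argument above can be sketched as a toy model. Here utility is defined purely over my (possibly retroactively revised) estimates of the latent, never over the true value directly; the numbers and the revelation mechanic are illustrative assumptions of mine, not anything from the original text. The point is that once a revelation can snap my retroactive estimates to the truth, optimizing my estimate-based utility pushes me toward wanting the true latent to be good:

```python
# Toy model: utility defined over *estimates* of a latent variable
# (a friend's happiness), where a later revelation retroactively
# corrects past estimates. All values here are illustrative.

def utility_of_estimates(true_happiness, revealed):
    """Sum of my (possibly revised) estimates of my friend's happiness.

    true_happiness: the friend's actual happiness in each period.
    revealed: if True, at the end I learn the truth, so my retroactive
    estimates snap to the true values; if False, I keep my naive estimates.
    """
    naive_estimates = [1.0] * len(true_happiness)  # I believed they were happy all along
    final_estimates = true_happiness if revealed else naive_estimates
    return sum(final_estimates)

secretly_depressed = [0.0, 0.0, 0.0]  # friend was actually unhappy for three periods

# If the illusion never breaks, my estimate-based utility stays high:
print(utility_of_estimates(secretly_depressed, revealed=False))  # 3.0

# If the truth comes out, my retroactive estimates drop, and so does my utility:
print(utility_of_estimates(secretly_depressed, revealed=True))   # 0.0
```

On this sketch, an agent who assigns any probability to the illusion breaking will, purely instrumentally, prefer that the true latent value be high—which is the sense in which caring about truth falls out of caring about estimates across time.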