If I consider satisfaction of my preferences to be a disaster, in what sense can I realistically call them my preferences? It feels like you’re more caught up on the difficulty of extrapolating these preferences outside of their standard operation, but that seems like a rather different issue.
I was thinking of a rather naive form of preference utilitarianism, of the sort “if the human agrees to it or chooses it, then it’s OK”. In particular, you can end up with some forms of depression where the human is miserable but isn’t willing to change.
I’ll clarify that in the post.