All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
Okay… so again, I’ll ask… why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?
I understand your frustration, since we don’t seem to be saying much to support our claims here. We’ve discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points.
However, there’s a lot of material that’s already been said elsewhere, so I hope you’ll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go.
Torture vs. Dust Specks kicked off the arguing; Eliezer began arguing for his own position in Circular Altruism and The “Intuitions” Behind “Utilitarianism”. Searching LW for keywords like “specks” or “utilitarian” should bring up more recent posts as well, but these three sum up more or less what I’d say in response to your question.

(There’s a whole metaethics sequence later on (see the whole list of Eliezer’s posts from Overcoming Bias), but that’s less germane to your immediate question.)
Oh, it’s no problem if you point me elsewhere. I should’ve specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I’ll check them out.
It’s especially hard if you use models based on utility maximization rather than on prediction-error minimization, or if you assume that human values are coherent even within a given individual, let alone across humanity as a whole.
That being said, it is certainly possible to map a subset of one’s preferences as they pertain to some specific subject, and to do a fair amount of pruning and tuning. One’s preferences are not necessarily opaque to reflection; they’re mostly just nonobvious.
What would be an example of a hidden preference? The post to which you linked didn’t explicitly mention that concept at all.