In the linked post, I discuss revealed preference analysis and argue that claims about someone’s actual values should predict not only what they do in their current situation, but also what they would do in substantially different situations, given e.g. different information and different expectations about how others behave.
(LessWrongers may ask “what does this add over CEV?” CEV is one possible hypothetical in which people are more informed, but it is impossible to compute, and it is also not predictive of what people do in actual situations where they have somewhat more information while still operating under cognitive limitations. Thus CEV “analysis”, unlike conditional/counterfactual analysis, ends up being largely spurious.)
In other words, “real preferences” are a functional part of a larger model of humans that supports counterfactual reasoning, and if you want to infer the preferences, you should also make sure that your larger model is a good model of humans. (Where “good” doesn’t just mean highly predictive; it includes other criteria, such as making talking about preferences a good idea, and maybe not deviating too far from our intuitive model.)
This was a useful concept to have, thanks.
Meta note: text on your website is really hard to read, due to the thin font (300 weight) and the very light text color (#666).
Do you mind cross-posting the full text of the post to LW?