Not only is “actual preferences” ill-defined, but so is “accurately represent.” So let me try to operationalize this a bit.
We have someone with a set of preferences that turn out to be mutually exclusive in the world they live in. We can in principle create a procedure for sorting their preferences into categories such that each preference falls into at least one category and all the preferences in a category can (at least in principle) be realized in that world at the same time. So suppose we’ve done this, and it turns out they have two categories A and B, where A includes those preferences Cato describes as “a fit of melancholy.”
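For concreteness, here is a minimal sketch of what such a sorting procedure might look like. It assumes we can reduce “jointly realizable” to an oracle over sets of preferences (a strong assumption, and the `realizable_together` predicate and the toy preference labels are all hypothetical):

```python
from typing import Callable, List

def categorize(preferences: List[str],
               realizable_together: Callable[[List[str]], bool]) -> List[List[str]]:
    """Greedily sort preferences into categories such that every
    preference lands in at least one category and every category is
    jointly realizable, per the supplied (hypothetical) oracle."""
    categories: List[List[str]] = []
    for p in preferences:
        placed = False
        for cat in categories:
            if realizable_together(cat + [p]):
                cat.append(p)
                placed = True  # "at least one category": stop at the first fit
                break
        if not placed:
            categories.append([p])  # p is incompatible with every existing category
    return categories

# Toy example: "live" and "die" cannot be realized together.
compatible = lambda cat: not ({"live", "die"} <= set(cat))
print(categorize(["live", "die", "eat well"], compatible))
# -> [['live', 'eat well'], ['die']]
```

(A greedy first-fit pass like this can split the preferences differently depending on input order; nothing in the argument hinges on which jointly-realizable partition you get, only that one exists.)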
I would say that their “actual” preferences = (A + B). It’s not realizable in the world, but it’s nevertheless their preference. So your question can be restated: does A or B more accurately represent (A + B)?
There doesn’t seem to be any nonarbitrary way to measure the extent of A, B, and (A+B) to determine this directly. I mean, what would you measure? The amount of brain matter devoted to representing all three? The number of lines of code required to represent them in some suitably powerful language?
One common approach is to look at their revealed preferences as demonstrated by the choices they make. Given an A-satisfying choice and a B-satisfying choice that are otherwise equivalent (constructing such a pair is left as an exercise for the class), which do they choose? This is tricky in this case, since the whole premise here is that their revealed preferences are inconsistent over time, but you could in principle measure their revealed preferences at multiple different times and weight the results accordingly (assuming for simplicity that all preference-moments carry identical weight).
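Under that equal-weight simplification, the aggregation reduces to a straight count. A sketch, where the observation list and the moments it samples are hypothetical:

```python
from collections import Counter
from typing import Iterable

def aggregate_revealed_preferences(observations: Iterable[str]) -> str:
    """Tally choices between otherwise-equivalent A-satisfying and
    B-satisfying options, sampled at many different moments.
    Every preference-moment gets identical weight (the simplifying
    assumption above), so the comparison is just a count."""
    tally = Counter(observations)
    a, b = tally["A"], tally["B"]
    if a > b:
        return "A > B"
    if b > a:
        return "B > A"
    return "A = B"

# Hypothetical observations across five preference-moments:
print(aggregate_revealed_preferences(["A", "B", "B", "A", "B"]))  # -> "B > A"
```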
When you were done doing all of that, you’d know whether A > B, B > A, or A = B.
It’s not in the least clear to me what good knowing that would do you. I suspect that this sort of analysis is not actually what you had in mind.
A more common approach is to decide which of A and B I endorse, and to assert that the one I endorse is their actual preference. E.g., if I endorse choosing to live over choosing to die, then I endorse B, and I therefore assert that B is their actual preference. But this is not emotionally satisfying when I say it baldly like that. Fortunately, there are all kinds of ways to conceal the question-begging nature of this approach, even from oneself.