Thanks for responding to my summary attempt. I agree with Robin that it is important to be able to clearly and succinctly express your main position, as only then can it be subject to proper criticism to see how well it holds up. In one way, I’m glad that you didn’t like my attempted summary as I think the position therein is false, but it does mean that we should keep looking for a neat summary. You currently have:
‘I should X’ means that X answers the question, “What will save my people? How can we all have more fun? How can we get more control over our own lives? What’s the funniest jokes we can tell? …”
But I’m not clear where the particular question is supposed to come from. I understand that you are trying to make it a fixed question in order to avoid deliberate preference change or self-fulfilling questions. So let’s say that for each person P, there is a specific question Q_P such that:
For a person P, ‘I should X’ means that X answers the question Q_P.
Now how is Q_P generated? Is it what P would want were she given access to all the best empirical and moral arguments (what I called being fully informed)? If so, do we have to time-index the judgment as well? That is, if P’s preferences change at some later time T1, did the person mean something different by ‘I should X’ before and after T1, or was the person just incorrect at one of those times? What if the change comes merely from acquiring better information (empirical or moral)?
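To make that time-indexing question concrete, here is a rough sketch of the two readings I have in mind (the subscript notation Q_{P,t} is mine, not something you have committed to):

Fixed reading: ‘I should X’, uttered by P at any time, is true iff X answers Q_P, where Q_P is generated once and for all (say, from P’s idealized preferences at some privileged time).

Time-indexed reading: ‘I should X’, uttered by P at time t, is true iff X answers Q_{P,t}, where Q_{P,t} is generated from P’s (idealized) preferences at t.

On the fixed reading, a preference change at T1 leaves the meaning constant and P is simply mistaken at one of the times; on the time-indexed reading, P means something different before and after T1. I’d like to know which of these (if either) is your view.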