There is no reason that I should base my ethical framework around anticipating my future ethical intuitions. I expect my future ethical intuitions to be flawed, inconsistent, and vulnerable to money pumping (or ‘rightness’ pumping, as the case may be). You’re making a general normative assertion (should, ‘you need a damn good explanation’) about other people’s ethics, and I reject it.
It doesn’t look to me like you’re interested in descriptive or normative ethics as the fields are usually conceived. That is fine, of course.
My comments make direct assertions regarding normative ethics, which clearly indicates interest. I just disagree with you and reject your ‘should’ unambiguously. My objections are similar to those in the other comments here.
Incidentally, I agree with the title of your post, just not your prescription.
So you’re basically just saying that you disagree with the conclusion of the post? I guess I thought you were saying something more complicated, since usually when people disagree with conclusions they try to show either that the argument is invalid or that one of the premises is untrue. Would you like to do either of those things?
(Reading this to myself, it sounds sarcastic. But I’m sincere.)
Hi Jack,
We might have trouble communicating across a two-way inferential barrier, as we make significantly different assumptions. But we are both being sincere, so I’ll try to give an outline of what I am saying:
I expect my future ethical intuitions to be reflectively inconsistent when multiplied out.
Reflectively inconsistent ethical systems, when followed, will have consequences that are suboptimal according to any given set of preferences over possible states of the universe (they leave you open to money pumping; see the sketch below).
Wedrifid-would-want to have a reflectively consistent ethical system.
Wedrifid should do things that wedrifid-would-want, a priori. (Tangentially, everyone else should do what wedrifid-would-want too. It so happens that following their own volition is a big part of wedrifid-would-want, but the very nature of ‘should’ makes all ‘should’ claims quite presumptive.)
Therefore, wedrifid should not base his ethical theories around predicting future ethical intuitions.
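To make the money-pump worry concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not anything from the post): an agent with cyclic preferences A ≺ B ≺ C ≺ A pays a small fee for every trade it ‘prefers’, and after a few rounds it holds its original item while being strictly poorer. The preference cycle, the fee, and the round count are all assumptions made up for illustration.

```python
# Toy money pump: an agent with intransitive preferences (B over A,
# C over B, A over C) pays a fee for each trade it prefers and ends
# up back where it started, strictly poorer.

def prefers(current, offered):
    """Cyclic (intransitive) preferences over three items."""
    cycle = {("A", "B"), ("B", "C"), ("C", "A")}
    return (current, offered) in cycle

def money_pump(start="A", fee=1, rounds=9):
    holding, wealth = start, 0
    for offered in "BCA" * (rounds // 3):
        if prefers(holding, offered):  # the agent happily pays to 'trade up'
            holding, wealth = offered, wealth - fee
    return holding, wealth

print(money_pump())  # ('A', -9): same item as at the start, 9 units poorer
```

Any set of ethical judgments that cycles like this can be exploited the same way, which is the sense in which reflective inconsistency is suboptimal under any fixed preferences.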
Allow me to replace ‘ethical intuitions’ with, let’s say, “Coherent Extrapolated Ethical Volition”. That substitution gets closer to where I think your position is, and I’m more comfortable with it. But even then I wouldn’t want to match my ethical judgments now with predicted future ethical intuitions. This is somewhat analogous to the discussion in A Much Better Life?. My ethical theories should match my (coherent) intuitions now, not the intuitions of that other guy called wedrifid who is in the future.
I should add: Something we may agree on is that we can use normal techniques of rational inquiry to better elicit what our Present-time Coherent Extrapolated Ethical Volition is. Since the process of acquiring evidence does take time, our effective positions may be similar. We may be in, as pjeby would put it, ‘Violent Agreement’. ‘Should’ claims do that sometimes. :)