The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions.
This seems problematic. If that’s the case, then your ethical system exists solely to support the bottom line. That’s just rationalizing, not actual thinking. Moreover, it doesn’t tell you anything helpful when people have conflicting intuitions or when you don’t have any strong intuition, and those are generally the interesting cases.
A system that could support any conclusion would be useless, and a system that couldn’t support the strongest and most common intuitions would be pretty incredible.
A system that doesn’t suffer from quodlibet isn’t going to support both of a pair of contradictory intuitions. And that’s pretty well the only way of resolving such issues. The rightness and wrongness of feelings can’t help.
So to make sure I understand: you are trying to make a system that agrees with and supports all your intuitions, and you hope that the system will then give unambiguous answers where you don’t have intuitions?
I don’t think that you realize how frequently our intuitions clash, not just the intuitions of different people, but even one’s own intuitions (for most people, at least). Consider, for example, trolley problems. Most people, whether or not they would pull the lever or push the fat person, feel some intuition for either solution. And trolley problems are far from the only example of a moral dilemma that causes that sort of issue. Many real-life situations, such as abortion, euthanasia, animal testing, the limits of consent, and many other issues, cause serious clashes of intuitions.
I want a system that supports core intuitions. A consistent system can help to disambiguate intuitions.
And how do you decide which intuitions are “core intuitions”?
There’s a high degree of agreement about them. They seem particularly clear to me.
Can you give some of those? I’d be curious what such a list would look like.
E.g., murder, stealing.
So what makes an intuition a core intuition and how did you determine that your intuitions about murder and stealing are core?
That’s a pretty short list.