The question in this thread was not “define Morality” but “explain how you determine which of ‘Killing innocent people is wrong barring extenuating circumstances’ and ‘Killing innocent people is right barring extenuating circumstances’ is morally right.”
(For people with other definitions of morality and / or other criteria for “rightness” besides morality, there may be other methods.)
The question was rather unhelpfully framed in Jublowskian terms of “observable consequences”. I think killing people is wrong because I don’t want to be killed, and I don’t want to Act on a Maxim I Would Not Wish to be Universal Law.
My name is getting all sorts of U’s and W’s these days.
If there were a person who decided they did want to be killed, would killing become “right”?
Does he want everyone to die? Does he want to kill them against their wishes? Are multiple agents going to converge on that opinion?
What are the answers under each of those possible conditions (or, at least, the interesting ones)?
Why do you need me to tell you? Under normal circumstances the usual “murder is wrong” answer will obtain; that’s the point.
Because I’m trying to have a discussion with you about your beliefs?
Looking at this I find it hard to avoid concluding that you’re not interested in a productive discussion—you asked a question about how to answer a question, got an answer, and refused to answer it anyway. Let me know if you wish to discuss with me as allies instead of enemies, but until and unless you do I’m going to have to bow out of talking with you on this topic.
I believe murder is wrong. I believe you can figure that out if you don’t know it. The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions. The point of asking questions is to demonstrate that it is possible to reason about morality: if someone answers the questions, they are doing the reasoning.
This seems problematic. If that’s the case, then your ethical system exists solely to support the bottom line. That’s just rationalizing, not actual thinking. Moreover, it doesn’t tell you anything helpful when people have conflicting intuitions or when you don’t have any strong intuition, and those are the generally interesting cases.
A system that could support any conclusion would be useless, and a system that couldn’t support the strongest and most common intuitions would be pretty incredible. A system that doesn’t suffer from quodlibet isn’t going to support both of a pair of contradictory intuitions. And that’s pretty well the only way of resolving such issues. The rightness and wrongness of feelings can’t help.
So to make sure I understand: you are trying to make a system that agrees with and supports all your intuitions, and you hope that the system will then give unambiguous answers where you don’t have intuitions?
I don’t think you realize how frequently our intuitions clash, not just the intuitions of different people, but even one’s own intuitions (for most people at least). Consider, for example, train car problems: most people, whether or not they would pull the lever or push the fat person, feel some intuition for either solution. And train problems are far from the only example of a moral dilemma that causes that sort of issue. Many real-life situations (abortion, euthanasia, animal testing, the limits of consent, and many others) cause serious clashes of intuitions.
I want a system that supports core intuitions. A consistent system can help to disambiguate intuitions.
And how do you decide which intuitions are “core intuitions”?
There’s a high degree of agreement about them. They seem particularly clear to me.
Can you give some of those? I’d be curious what such a list would look like.
E.g., murder, stealing.
So what makes an intuition a core intuition and how did you determine that your intuitions about murder and stealing are core?
That’s a pretty short list.
In this post: “How do you determine which one is accurate?”
In your response further down the thread: “I am not dodging [that question]. I am arguing that [it is] inappropriate to the domain [...]”
And then my post: “But you already have determined that one of them is accurate, right?”
That question was not phrased in the way you object to, and yet you still haven’t answered it.
Though, at this point it seems one can infer (from the parent post) that the answer is something like “I reason about which principle is more beneficial to me.”