I think I agree with Eugine_Nier that the ability to draw conclusions isn’t itself a moral theory. One doesn’t need to commit to any ethical or meta-ethical principles to notice that Clippy’s preferences will be met better if Clippy creates some paperclips.
At the level of abstraction we’re operating at now, moral theories exist to tell us what preferences to have, and meta-ethical theories tell us what kinds of moral theories are worth considering.
Does one need to commit to a theory to have one? It sounds to me like you only think a person has a moral theory when the moral theory has them.
moral theories exist to tell us what preferences to have,
For you, under your moral theories. Not for me. I’m happy to have theories that tell me what moral values I do have, and what moral values other people have.
Obviously not—but it isn’t your moral theory that tells you how Clippy will maximize its preferences.
Alice the consequentialist and Bob the deontologist disagree about moral reasoning. But Bob does not need to become a consequentialist to predict what Alice will maximize, and vice versa.
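To make the Alice/Bob point concrete, here is a minimal sketch (hypothetical agents and utility functions, purely illustrative): predicting an agent’s choice requires only a model of that agent’s preferences, not a commitment to them.

```python
# Illustrative sketch, not anyone's actual decision procedure: an agent is
# modeled as a utility function over outcomes, and its predicted choice is
# simply the outcome it ranks highest.

def predict_choice(utility, options):
    """Return the option the modeled agent ranks highest."""
    return max(options, key=utility)

# Clippy cares only about paperclips; the person running this need not.
clippy_utility = lambda world: world["paperclips"]

options = [
    {"paperclips": 0, "staples": 10},
    {"paperclips": 100, "staples": 0},
]

print(predict_choice(clippy_utility, options))
# -> {'paperclips': 100, 'staples': 0}
```

Bob can write down `clippy_utility` (or a model of Alice’s consequentialism) without adopting it; the prediction machinery is ethically neutral.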
What do you want to call those kinds of theories?
Reasoning? More generally, thinking about (and caring about) the consequences of actions is not limited to consequentialists. A competent deontologist knows that pointing a gun at someone and pulling the trigger tends to cause murder; that’s why she tends not to do that.
moral theories exist to tell us what preferences to have,
For you, under your moral theories. Not for me.
I should be working now, but I don’t want to. So I’m here, relaxing and discussing philosophy. But I am committing a minor wrong in that I am acting on a preference that is inconsistent with my moral obligation to support my family (as I see my obligations). Does that type of inconsistency between preference and right action never happen to you?
Those aren’t accurate statements of the kinds of moral theories I was speaking of.
I gave the example:
That’s not an imperative; it’s an identification of a relationship between different values, in this case that A, B, and C together imply D.
Ok, that’s not a moral theory unless you’re sneaking in the statements I made in the parent as connotations.
To me, a theory that identifies a moral value implied by other moral values would count as a moral theory.
What kind of theory do you want to call it?