It seems to me that ANY moral theory is, at its root, emotive. A utilitarianism of the form “do utile things!” amounts to deciding that maximizing utility feels good, and so is moral. In other words, the argument for the basic axiom of utilitarianism is “Yay utility!”
A non-emotive utilitarianism, or any consequentialist theory, could never go beyond “A implies B.” That is, if people do A, the result they will get is B. Without “Yay B!” this is not an argument for doing A.
Am I missing something?
If I am moved by a should-argument to an x-ism, then “Yay x-ism!” is what being moved by that argument feels like, not an additional part of the argument.
Otherwise, aren’t you the tortoise from Carroll’s “What the Tortoise Said to Achilles,” demanding “Yay (Yay X!)!”, “Yay (Yay (Yay X!)!)!” and so on?
You seem to be assuming, without argument, that emotion is the only motivation for doing anything.
I tend to agree with mwengler—value is not a property of physical objects or world states, but a property of an observer having unequal preferences for different possible futures.
There is a risk that we are disagreeing only because we are working with different interpretations of emotion.
Imagine a work of fiction involving no sentient beings, not even metaphorically: could you possibly write a happy or tragic ending? Isn’t it only when you introduce some form of intelligence with preferences that destruction becomes bad and serenity good? And aren’t preferences for this over that the same thing as emotion?
You are right: the only reason I can think of for doing anything is because I feel like it, because I want to, which is emotional. In more detail, I think this includes doing things to avoid what I am afraid of or find painful, which is also emotional. Certainly pleasure-seeking is emotional. I attribute playing sudoku to the pleasure of having my mind occupied.
If you come up with something like a Kantian categorical imperative, I will tell you I don’t follow categorical imperatives because I don’t feel like it, and nothing in the real world of “is” seems to break when I act that way. And it does suggest to me that those who follow a categorical imperative do so because they feel like it: the feeling of logical consistency or superiority appeals to them.
Please let me know what OTHER reasons, non-emotional reasons, there are to do something.
There’s no logical reason why any given entity, human or otherwise, would have to be motivated by emotion. You may be overgeneralising from the single example of yourself. Also, you would have to believe that highly logical, Vulcan-like people are motivated by some emotion they don’t show.
There’s a trivial “logical” reason why this could be the case—tautology—if the person you are talking to defines “emotion” as “those mental states which directly motivate behaviour”. Which seems like a perfectly good starting place to me.
In other words, this conversation will likely go nowhere until you taboo “emotion” so we can know what work that word does for you.
It wasn’t my initial claim, and I have already pointed out that seemingly unemotional people motivate themselves somehow.