I personally would prefer to use the word “theory” to mean “a scientific theory that is, by definition, falsifiable”. But it’s not a strong preference; I merely think that it helps reduce confusion. As long as we make sure to define what we mean by the word ahead of time, we can use the word “theory” in the vernacular sense as well.
Regarding moral theories, I have to admit that my understanding of them is somewhat shaky. Still, if moral theories are completely unfalsifiable, then how do we compare them to discover which is better? And if we can’t determine which moral theories are better than others, what’s the point in talking about them at all?
I said earlier that Utilitarianism is more like an algorithm than like a scientific theory; the reason is that Utilitarianism doesn’t tell you how to obtain the utility function. However, we can still probably say that, given a utility function, Utilitarianism is better than something like Divine Command. Or can we? If we can, then we are implicitly looking at the results of applying both of these theories throughout history and evaluating them against some criteria, which looks a lot like falsifiability. If we cannot, then what are those moral theories for?
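To make the “algorithm” framing concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the action set, the outcome model, and the utility function are all inputs, precisely because Utilitarianism-as-algorithm says nothing about where any of them come from.

```python
# A minimal sketch of Utilitarianism-as-algorithm. The actions, the
# outcome model, and the utility function are all hypothetical inputs;
# the "theory" itself is just the maximization step at the end.

def best_action(actions, outcome_of, utility):
    """Return the action whose predicted outcome maximizes utility."""
    return max(actions, key=lambda action: utility(outcome_of(action)))
```

Swapping in a different utility function changes the verdicts without changing the procedure, which is why “given a utility function” is doing so much work above.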
It should be noted that Utilitarianism (the ethical theory) states that the outputs of Utilitarianism (the algorithm) constitute morality.
Oh… so does Utilitarianism (the ethical theory) actually prescribe a specific utility function? If so, how is the function derived? As I said, my understanding of moral theories is a bit shaky; sorry about that.
When Utilitarianism was first proposed, Bentham and Mill identified the good as basically “pleasure good / pain bad”. Since then, Utilitarianism has become a family of theories, largely differentiated by their conceptions of the good.
One common factor of ethical theories called “Utilitarianism” is that they tend to be agent-neutral; thus, one would not talk about “an agent’s utility function”, but “overall net utility” (a dubious concept).
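A rough way to picture the agent-neutral move, as a sketch only: score an outcome by aggregating utility across all affected agents rather than by any one agent’s utility. Straight summation is an assumption here, and making utilities comparable across agents is exactly the dubious part.

```python
# Sketch of "overall net utility": aggregate utility across all affected
# agents instead of scoring an outcome by a single agent's utility.
# Simple summation is assumed; interpersonal comparability is the hard part.

def overall_net_utility(outcome, agents, utility_of):
    """Sum each agent's utility for the outcome (assumes comparability)."""
    return sum(utility_of(agent, outcome) for agent in agents)
```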
“Consequentialism” refers, only slightly more generally, to the family of ethical theories that take the consequences of actions to be the only thing that matters morally.
Thanks, that clears things up. But, as you said, “overall net utility” is kind of a dubious concept. I suspect that no one has yet figured out a way to compute this utility function in a semi-objective way… is that right?
I personally would prefer to use the word “theory” to mean “a scientific theory that is, by definition, falsifiable”
So would I. But it’s just an ambiguous word in English that means different things in different places. As I take it into the extremely foggy areas that also use the word “theory”, I’m going for something like “has explanatory power”.
Just a quick definition here: When people say “moral theory”, they mean the procedure(s) they use to generate their terminal values (i.e. the ends they are trying to achieve). Instrumental values (i.e. how to achieve those ends) are much less troublesome.
if moral theories are completely unfalsifiable, then how do we compare them to discover which is better?
I’m not sure that the consensus here is that all moral theories are unfalsifiable (although I believe that is a fact about moral theories). If theories are unfalsifiable, then comparison from some “objective” position is conceptually problematic (which I expect is why politics is the mind-killer).
And if we can’t determine which moral theories are better than others, what’s the point in talking about them at all?
We still make decisions, and I think we are right to say that the decisions are “moral decisions” because they have moral consequences. Thus, one reason to discuss moral theories is to determine [as a descriptive matter] what morality one follows, in some attempt to be internally consistent.
When people say “moral theory”, they mean the procedure(s) they use to generate their terminal values (i.e. the ends they are trying to achieve).
Understood, thanks.
I’m not sure that the consensus here is that all moral theories are unfalsifiable (although I believe that is a fact about moral theories).
Let’s go with what you believe, then, and if the consensus wants to disagree, they can chime in :-)
Thus, one reason to discuss moral theories is to determine [as a descriptive matter] what morality one follows, in some attempt to be internally consistent
Are you saying that moral theories are descriptive, and not prescriptive? In that case, discussing moral theories is similar to discussing human psychology, or cognitive science, or possibly sociology. That makes sense to me, though I think that most people would disagree. But, again, if this is what you believe as well, then we are in agreement, and the consensus can chime in if it feels like arguing.