That’s not a flaw in consequentialism. It’s a flaw in judging other people’s morality.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
judging the moral worth of others’ actions is something a moral theory should enable one to do. It’s not something you can just give up on.
So two consequentialists would each decide that they have moral responsibility and the other doesn’t? Does that make sense? Is it intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten?
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question “what is B morally responsible for” does not answer the question “what should A do”, which is the only question A is interested in.
A would agree that for B, B is morally responsible for everything, but would comment that that’s not very interesting (to A) as a moral question.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
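The two-place reading above can be made concrete with a small illustrative sketch (my framing, not from the thread): a one-place predicate asks “is X morally responsible?”, while the two-place version asks “responsible from whose standpoint?”. The function name and agent labels are hypothetical.

```python
def responsible(judge: str, agent: str) -> bool:
    """Moral responsibility as a two-place relation: for this sort of
    consequentialist, each judge holds only themselves responsible
    for everything that happens in the world."""
    return judge == agent

# For A, A is responsible and B is not; for B, it is exactly reversed.
# There is no contradiction, because the two claims are indexed to
# different judges -- the relation never collapses to one place.
assert responsible("A", "A") and not responsible("A", "B")
assert responsible("B", "B") and not responsible("B", "A")
```

On this sketch, “A and B each hold only themselves responsible” is consistent in the same way “x is to the left of y, and y is to the left of x, from opposite viewpoints” is consistent: the apparent clash dissolves once the hidden second argument is made explicit.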
By extension, however, in case this corollary was lost in inferential distance:
For A, “What should A do?” may include making moral evaluations of B’s possible actions within A’s model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important.
Thus, on grounds of instrumental utility, A should often build a model of B in order to influence B’s actions on the world as much as possible, since exerting that influence is one of the actions A can take that bears on A’s own moral responsibility toward the world.
Indeed. I would consider it a given that you should model the objects in your world if you want to predict and influence the world.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”.
I don’t see how that follows from consequentialism or anything else.
Then it is limited.
I get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don’t see things this way.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
It doesn’t follow from that that you have no interest in praise and blame.
Isn’t A interested in the actions of B and C that impinge on A?
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
“actions of B and C that impinge on A” is a subset of 1) and “giving praise and blame” is a subset of 2). “Influencing the actions of B and C” is also a subset of 2).
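The decomposition above can be sketched as a toy decision problem (my illustration, not from the thread; the world-state, actions, and value function are all made-up stand-ins): A’s choice uses exactly the two inputs listed, and “influencing B” enters as just another candidate action.

```python
# 1) The state of the world (a trivial stand-in).
world = {"suffering": 10}

# 2) A's possible actions and their consequences: each action maps the
#    current world-state to an expected future world-state.
actions = {
    "do_nothing": lambda w: dict(w),
    "help":       lambda w: {"suffering": w["suffering"] - 3},
    # Influencing B (e.g. via praise or blame) is a subset of 2):
    "praise_B":   lambda w: {"suffering": w["suffering"] - 1},
}

def value(w):
    """A's evaluation of a world-state: less suffering is better."""
    return -w["suffering"]

# "What should A do?" == pick the action whose expected consequences
# A values most. Nothing else about B needs to enter the calculation.
best = max(actions, key=lambda a: value(actions[a](world)))
```

Here `best` comes out as `"help"`, and praising B would be chosen only if its expected effect on the world beat A’s direct options; B’s moral worth plays no role except through those predicted effects.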
Or, briefly, “The Union of A and not-A”,
or, more briefly still:
“Everything”.
Yes, and it doesn’t follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard; it’s just not the same standard I use for myself.
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
Your metaethics treats everyone as acting but not acted on?