No; Golden-Rule deontology, for instance, is very similar to timeless cooperation, and that doesn’t strike me as a misguided thing to be thinking about.
Well, there are two things I have to say in response to that:
Timeless decision-making is a decision algorithm; you can use it to maximize any utility function you want. In other words, it’s instrumental, not terminal. So it’s hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.
Timeless decision-making is still based on your estimated degree of similarity to other agents on the playing field. I’ll only cooperate in the one-shot Prisoner’s Dilemma if I suspect my decision and my opponent’s are logically connected. So even if you advocate timeless decision-making, “cooperate in PD-like situations” is still not going to be a universal rule like the Golden Rule.
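To make that second point concrete, here is a minimal sketch of the idea (not anyone’s canonical TDT implementation): the payoff matrix is the standard one-shot Prisoner’s Dilemma, and `p_correlated` — my credence that the opponent’s decision is logically tied to mine — is an assumed, made-up parameter. The agent cooperates only when that credence is high enough, which is exactly why “cooperate in PD-like situations” doesn’t come out as a universal rule.

```python
# Sketch of a timeless-flavored agent in the one-shot Prisoner's Dilemma.
# Payoffs and probabilities are illustrative assumptions, not canonical values.

# Payoff to "me", indexed by (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def timeless_move(p_correlated: float, p_opponent_cooperates: float) -> str:
    """Choose C or D by expected payoff.

    p_correlated: credence that the opponent's decision is logically tied to
        mine (e.g. they run a sufficiently similar algorithm), so whatever I
        output, they output too.
    p_opponent_cooperates: credence that they cooperate in the uncorrelated case.
    """
    def expected(my_move: str) -> float:
        # If our decisions are logically correlated, their move mirrors mine.
        correlated = PAYOFF[(my_move, my_move)]
        # Otherwise their move is whatever it would have been anyway.
        independent = (p_opponent_cooperates * PAYOFF[(my_move, "C")]
                       + (1 - p_opponent_cooperates) * PAYOFF[(my_move, "D")])
        return p_correlated * correlated + (1 - p_correlated) * independent

    return "C" if expected("C") > expected("D") else "D"

# Against a near-copy of myself, cooperation wins; against an unrelated agent it doesn't.
print(timeless_move(p_correlated=0.9, p_opponent_cooperates=0.5))  # -> C
print(timeless_move(p_correlated=0.1, p_opponent_cooperates=0.5))  # -> D
```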
I changed my mind midway through this post. Hopefully it still makes sense… I started out disagreeing with you based on the first two thoughts that came to mind, but I’m now beginning to think you may be right.
So it’s hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.
I.
This statement doesn’t really fit with moral philosophy (at least as I read it).
Consequentialism distinguishes itself from other moral theories by placing more emphasis on terminal values than other approaches do. A consequentialist can have “no murder” as a terminal value, but that’s different from a deontologist believing that murder is wrong or a virtue ethicist believing that virtuous people don’t commit murder. A true consequentialist seeking to minimize the amount of murder in the world would be willing to commit a murder to prevent more murders; neither a deontologist nor a virtue ethicist would.
Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that those values sometimes conflict with each other. It describes morality as a negotiated system of adopting or avoiding certain instrumental goals, so that the people who implicitly negotiate the contract mutually benefit in attaining their terminal values. It says nothing about what kind of terminal values people should have.
II.
Discussions of morality focus on what people “should” do, what people “should” think, etc. The general idea of terminal values is that you have them and they don’t change in response to other considerations. They’re the fixed points that shape the way you think about what you want to accomplish with your instrumental goals. On that view, there’s no point in discussing what kind of terminal values people “should” have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.
III.
The psychological conditions that cause people to become immoral by most other people’s standards have a lot to do with terminal values, but nothing to do with the kinds of terminal values that people talk about when they discuss morality.
Sociopaths are people who don’t experience empathy or remorse. Psychopaths are people who don’t experience empathy, remorse, or fear. Being able to feel fear doesn’t seem like the sort of thing that’s relevant to a discussion about morality… but seeming irrelevant isn’t the same as being irrelevant. Maybe it is relevant.
Maybe what we mean by morality is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say to me… but it also seems pretty empirically accurate as a description of what people typically mean when they talk about morality.
Contractualism is a framework for thinking about morality that presupposes that people have terminal values and that those values sometimes conflict with each other.
Instrumental values can clash too. The instrumental-terminal axis is pretty well orthogonal to the morally relevant/irrelevant axis.