Thanks for writing this. I’ve started to shift away from utilitarianism towards something that is more a combination of utilitarianism and contract theory, with the utilitarianism being about altruism and the contract theory being about building co-operation. I haven’t worked out the specifics of how to make this work yet, only the vague outline.
I guess the way you’ve justified focusing on co-operation above is in terms of consequences. However, people are often reluctant to co-operate with someone who will use consequentialist justifications to break co-operation, so I think it’s necessary to place some intrinsic value on co-operation.
I understand the direction, but it’s VERY hard to mix the two without the contractualism just collapsing into a part of the consequentialism. Being known as a promise-keeper is instrumentally desirable, and in VERY MANY cases leads to behaviors that are suboptimal in the short term. But that is just longer-term consequentialist optimization.
And, of course, there can be a divergence between your public and private beliefs. It’s quite likely that, even if you’re a pure consequentialist in truth (and acknowledge the instrumental value of contracts and the heuristic/calculation value of deontological-sounding rules), you’ll get BETTER consequences if you signal extra strength to the less-flexible aspects of your beliefs.
I already tried to address this, although maybe I could have been clearer. If you are just weighing the utility of defecting against the utility of forgoing that opportunity in order to co-operate and build/maintain trust, then people will see you as manipulative and won’t trust you. So you need to value co-operation more than that.
But then, maybe your point is that you can include this in the utility calculation too? If so, it would be useful for you to confirm.
you can include this in the utility calculation too?
Exactly. Not only can, but must. Naive consequentialism (looking at short-term, easily-considered factors ONLY and ignoring the complex and longer-term impact) is just dumb. Sane consequentialism includes all changes to the world conditional on an action. In many cases (perhaps the vast majority), the impact on others’ trust and on how easily they can model you is much bigger than the immediate visible consequence of an action.
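To make that concrete, here’s a toy sketch in Python (the numbers and function names are mine, invented for illustration, not anything from the post): once you count the expected value of future co-operation, weighted by how the action affects others’ trust in you, the comparison can flip.

```python
# Toy sketch (all numbers invented) of "naive" vs "sane" consequentialist accounting.
# Naive: count only the immediate payoff. Sane: also count the expected value of
# future interactions, scaled by how much the action affects others' trust in you.

def naive_value(immediate_payoff):
    """Only the immediate, easily-visible consequence."""
    return immediate_payoff

def sane_value(immediate_payoff, future_value_per_round, rounds, trust_after_action):
    """Immediate payoff plus future co-operation value, scaled by remaining trust."""
    return immediate_payoff + trust_after_action * future_value_per_round * rounds

# Defecting pays more right now but craters trust; co-operating pays less now
# but preserves it. (Illustrative numbers only.)
defect_now, cooperate_now = 10.0, 3.0
future_per_round, rounds = 4.0, 20
trust_if_defect, trust_if_cooperate = 0.1, 0.9

print("naive:", naive_value(defect_now), "vs", naive_value(cooperate_now))
# naive: 10.0 vs 3.0   -> defection looks better
print("sane :", sane_value(defect_now, future_per_round, rounds, trust_if_defect),
      "vs", sane_value(cooperate_now, future_per_round, rounds, trust_if_cooperate))
# sane : 18.0 vs 75.0  -> co-operation dominates once trust effects are counted
```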
And, since more distant and compounded consequences are MUCH harder to calculate, it’s quite often better to follow the heuristics of a deontological ruleset rather than trying to calculate everything. It’s still consequentialism under the covers, and there may be a few cases in one’s life in which it IS better (for one’s terminal goals) to break a rule or oath, but those are extremely rare. Rare enough that they may mostly be of interest to intellectuals and researchers trying to find universal mechanisms, rather than to people just trying to live good lives.
This is what rule and virtue (and global) consequentialism are for. You don’t need to be calculating all the time, and as you point out, that might be counterproductive. But every now and then, you should (re)evaluate what rules to follow and what kind of character you want to cultivate.
And I don’t mean to say that rule or virtue consequentialism is the correct moral theory; I just mean that you should use rules and virtues, as a practical matter, since doing so leads to better consequences.
Sometimes you will want to break a rule. This can be okay, but it should not be taken lightly, and it would be better if your rule included its exceptions. A rule can be something like a very strong prior for or against certain kinds of acts.
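To make the “strong prior” framing concrete, here’s a toy sketch in Python (the odds are made-up illustrations, not a worked-out theory): treating a rule as heavy prior odds against a class of acts means the situational case for breaking it has to be enormous before the posterior actually favors breaking it.

```python
# Toy sketch (assumed numbers): a rule like "keep your promises" treated as
# very strong prior odds against a class of acts. In odds form, the evidence
# favoring an exception must be huge before the posterior flips.

def posterior_favors_breaking(prior_odds_against, evidence_likelihood_ratio):
    """Bayes in odds form: divide the prior odds against breaking the rule
    by the likelihood ratio favoring breaking it; flip only if odds drop below 1."""
    posterior_odds_against = prior_odds_against / evidence_likelihood_ratio
    return posterior_odds_against < 1.0

prior_odds_against = 10_000.0  # the "rule": strongly against promise-breaking by default

# An apparently compelling case for defecting (evidence favoring it 100:1)
# still doesn't overcome the prior...
print(posterior_favors_breaking(prior_odds_against, 100.0))     # False
# ...only truly extreme situations do.
print(posterior_favors_breaking(prior_odds_against, 50_000.0))  # True
```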