Hum, no, it means that I don’t use raw consequentialism/utilitarianism as my ethical framework. I consider them to be theoretically valid, but not directly usable by humans, who are unable to foresee all the consequences of their acts and who have so many biases.
So while I can use consequentialism to reason about meta-ethics, and even to amend my ethical rules, I don’t use it to make ethical decisions. I don’t trust myself to do that. And in my current ethical framework, sheltering Jews from Hitler is something that should be done, regardless of the risk to myself.
(nods) There’s a reason I didn’t say anything about what you would do, but rather about what I would.
That said, I’m curious… can you clarify what you mean by “amend my ethical rules” here?
For example… so, OK, at the moment your ethical rules include the rule that “sheltering Jews from Hitler is something that should be done regardless of the risk [to you]”. Let’s assume for simplicity that it also includes no rules that conflict with that rule. It follows that, given the choice to shelter a Jew or not, you shelter… that’s straightforward; no evaluation of consequences is necessary.
Now, suppose you come to believe that your shelter has been compromised and is under Nazi observation, such that any Jew you shelter will be killed. It seems to follow straightforwardly that you still shelter the Jew, because actually prolonging the Jew’s life is irrelevant… that’s a mere consequence.
But your reference to amending your ethical rules suggests that it might not be that simple. Might you, in this hypothetical example, instead “use consequentialism to [...] amend [your] ethical rules” so that they no longer motivate you to shelter the Jew in situations where doing so leads to the Jew’s death?
Well, we’re starting to play with words right now… “shelter”, according to freedictionary, means “a. Something that provides cover or protection, as from the weather.” If Nazis are observing my home, then it’s no longer a “shelter” but a “trap”, in the meaning I was giving to “shelter”.
But the ethical rule is not actually “shelter Jews from Hitler”; it’s more like “protect, if you can, people who are threatened with something horrible that they did nothing to deserve”, or something like that. It’s not even explicit in terms of words; I’m not trying to write a legal contract with myself.
And of course I’ll need to evaluate some consequences of my acts in order to choose what to do. I’m not saying I don’t use consequentialism at all, just that I don’t use it “raw”. I won’t re-evaluate the consequences of sheltering Jews from Hitler (or anyone persecuted by a hateful dictatorship), in terms of risk to myself and benefit to them, when put in front of the choice. Partly because I know it would then be easy to rationalize a reason not to take the risks (“but if I don’t, I’ll be able to save more later on” or whatever).
What I was referring to by “amend my ethical rules” is that I do theoretical reasoning about what my ethical rules should be, and in doing so I can change them. But that’s something I refuse to do under the actual pressure of an actual dilemma, because I don’t trust myself to perform well enough under pressure: I know rationalizing is easy, and I know that even just doing maths under pressure leads to a higher error rate.
It somehow reminds me of the article of the French Constitution saying that the Constitution cannot be changed during a war (a clear reference to WW2 and the way Pétain changed the Constitution), but I found it very interesting as a more general guideline: don’t change your most fundamental rules while under heavy pressure.
But we’ve somehow drifted a long way from the initial topic, sorry for the noise ;)
I certainly agree that, given the choice, I’d rather have the opportunity to think carefully about what I ought to do in a situation. But while I’m aware that performing analysis under pressure is error-prone, I’m also aware that applying rules derived from one situation to a different situation without analysis is error-prone. In the real world, it’s sometimes a choice among imperfect options.
Partly because I know it would then be easy to rationalize... But that’s something I refuse to do under the actual pressure of an actual dilemma, because I don’t trust myself to perform well enough under pressure
My glasses distort light. A person with perfect vision wouldn’t wear them. They are calibrated to counterbalance my deficiencies.
What is the ideal moral system that someone who didn’t rationalize would use?